Test Report: KVM_Linux_crio 18966

6c595620fab5adb75898ef5927d180f0ecb72463:2024-05-28:34666

Failed tests (31/312)

Order  Failed test  Duration (s)
30 TestAddons/parallel/Ingress 155.51
32 TestAddons/parallel/MetricsServer 340.58
45 TestAddons/StoppedEnableDisable 154.36
47 TestCertExpiration 1145.6
164 TestMultiControlPlane/serial/StopSecondaryNode 141.8
166 TestMultiControlPlane/serial/RestartSecondaryNode 60.53
168 TestMultiControlPlane/serial/RestartClusterKeepsNodes 362.24
171 TestMultiControlPlane/serial/StopCluster 141.69
231 TestMultiNode/serial/RestartKeepsNodes 304.78
233 TestMultiNode/serial/StopMultiNode 141.36
240 TestPreload 250.89
248 TestKubernetesUpgrade 387.73
282 TestPause/serial/SecondStartNoReconfiguration 63.32
320 TestStartStop/group/old-k8s-version/serial/FirstStart 271.2
338 TestStartStop/group/no-preload/serial/Stop 139.05
340 TestStartStop/group/embed-certs/serial/Stop 139.13
341 TestStartStop/group/old-k8s-version/serial/DeployApp 0.46
342 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 98.53
343 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.39
344 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
349 TestStartStop/group/old-k8s-version/serial/SecondStart 737.38
354 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.14
355 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
357 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 542.26
358 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 542.47
359 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 541.62
360 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 432.17
361 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 335.82
362 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 542.06
363 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 147.73
375 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 238.6
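
For local triage, any single entry from this list can be filtered with go test's -run regex from a minikube checkout. The sketch below is only a starting point under stated assumptions: it presumes a kvm2/cri-o capable host, and the build tags plus driver/runtime arguments the suite expects should be taken from the repo's Makefile integration target and contributor docs rather than from here.

    # Hypothetical re-run of one failed test; -run takes a regex over the test name hierarchy.
    # Any extra build tags or driver/runtime flags are placeholders to verify against the minikube docs.
    go test ./test/integration/... -run 'TestAddons/parallel/Ingress' -timeout 90m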
TestAddons/parallel/Ingress (155.51s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-307023 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-307023 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-307023 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [827204ae-e6ad-4624-87ec-f215a8cd56dd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [827204ae-e6ad-4624-87ec-f215a8cd56dd] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003164194s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-307023 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-307023 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.893148638s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
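
For context, exit status 28 from the ssh'd command is curl's "operation timed out" code, i.e. the request issued inside the VM never got an answer from the ingress before giving up. A minimal sketch for poking at this by hand, assuming the addons-307023 profile from this run is still up and the ingress addon has not yet been disabled (the --max-time value is illustrative):

    # Re-run the same probe with verbose output and an explicit timeout.
    out/minikube-linux-amd64 -p addons-307023 ssh "curl -v --max-time 60 http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # Confirm the nginx pod and ingress object created from the testdata manifests are present.
    kubectl --context addons-307023 get ingress,pods -o wide
    # Check whether the ingress-nginx controller ever logged the request (same selector the wait step used).
    kubectl --context addons-307023 -n ingress-nginx logs -l app.kubernetes.io/component=controller --tail=100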
addons_test.go:288: (dbg) Run:  kubectl --context addons-307023 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-307023 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.230
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-307023 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-307023 addons disable ingress-dns --alsologtostderr -v=1: (1.680960638s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-307023 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-307023 addons disable ingress --alsologtostderr -v=1: (7.960173662s)
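
A note for local reproduction: the post-mortem below only keeps the last 25 lines of the minikube logs (logs -n 25). When iterating on this failure it can help to capture the full log instead; a small sketch, assuming the profile still exists:

    # Write the complete minikube logs for the profile to a file (path is arbitrary).
    out/minikube-linux-amd64 -p addons-307023 logs --file=/tmp/addons-307023.log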
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-307023 -n addons-307023
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-307023 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-307023 logs -n 25: (1.269531312s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-984992 | jenkins | v1.33.1 | 28 May 24 20:22 UTC |                     |
	|         | -p download-only-984992                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.1 | 28 May 24 20:22 UTC | 28 May 24 20:22 UTC |
	| delete  | -p download-only-984992                                                                     | download-only-984992 | jenkins | v1.33.1 | 28 May 24 20:22 UTC | 28 May 24 20:22 UTC |
	| delete  | -p download-only-610519                                                                     | download-only-610519 | jenkins | v1.33.1 | 28 May 24 20:22 UTC | 28 May 24 20:22 UTC |
	| delete  | -p download-only-984992                                                                     | download-only-984992 | jenkins | v1.33.1 | 28 May 24 20:22 UTC | 28 May 24 20:22 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-408531 | jenkins | v1.33.1 | 28 May 24 20:22 UTC |                     |
	|         | binary-mirror-408531                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:38549                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-408531                                                                     | binary-mirror-408531 | jenkins | v1.33.1 | 28 May 24 20:22 UTC | 28 May 24 20:22 UTC |
	| addons  | enable dashboard -p                                                                         | addons-307023        | jenkins | v1.33.1 | 28 May 24 20:22 UTC |                     |
	|         | addons-307023                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-307023        | jenkins | v1.33.1 | 28 May 24 20:22 UTC |                     |
	|         | addons-307023                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-307023 --wait=true                                                                | addons-307023        | jenkins | v1.33.1 | 28 May 24 20:22 UTC | 28 May 24 20:24 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-307023        | jenkins | v1.33.1 | 28 May 24 20:24 UTC | 28 May 24 20:24 UTC |
	|         | -p addons-307023                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-307023        | jenkins | v1.33.1 | 28 May 24 20:24 UTC | 28 May 24 20:24 UTC |
	|         | -p addons-307023                                                                            |                      |         |         |                     |                     |
	| addons  | addons-307023 addons disable                                                                | addons-307023        | jenkins | v1.33.1 | 28 May 24 20:24 UTC | 28 May 24 20:24 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-307023        | jenkins | v1.33.1 | 28 May 24 20:25 UTC | 28 May 24 20:25 UTC |
	|         | addons-307023                                                                               |                      |         |         |                     |                     |
	| ip      | addons-307023 ip                                                                            | addons-307023        | jenkins | v1.33.1 | 28 May 24 20:25 UTC | 28 May 24 20:25 UTC |
	| addons  | addons-307023 addons disable                                                                | addons-307023        | jenkins | v1.33.1 | 28 May 24 20:25 UTC | 28 May 24 20:25 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-307023        | jenkins | v1.33.1 | 28 May 24 20:25 UTC | 28 May 24 20:25 UTC |
	|         | addons-307023                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-307023 ssh cat                                                                       | addons-307023        | jenkins | v1.33.1 | 28 May 24 20:25 UTC | 28 May 24 20:25 UTC |
	|         | /opt/local-path-provisioner/pvc-ea111a43-617c-4baa-a9fd-5cb0ed5a97d7_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-307023 addons disable                                                                | addons-307023        | jenkins | v1.33.1 | 28 May 24 20:25 UTC | 28 May 24 20:25 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-307023 ssh curl -s                                                                   | addons-307023        | jenkins | v1.33.1 | 28 May 24 20:25 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-307023 addons                                                                        | addons-307023        | jenkins | v1.33.1 | 28 May 24 20:26 UTC | 28 May 24 20:26 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-307023 addons                                                                        | addons-307023        | jenkins | v1.33.1 | 28 May 24 20:26 UTC | 28 May 24 20:26 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-307023 ip                                                                            | addons-307023        | jenkins | v1.33.1 | 28 May 24 20:27 UTC | 28 May 24 20:27 UTC |
	| addons  | addons-307023 addons disable                                                                | addons-307023        | jenkins | v1.33.1 | 28 May 24 20:27 UTC | 28 May 24 20:27 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-307023 addons disable                                                                | addons-307023        | jenkins | v1.33.1 | 28 May 24 20:27 UTC | 28 May 24 20:27 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/28 20:22:16
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0528 20:22:16.299687   12512 out.go:291] Setting OutFile to fd 1 ...
	I0528 20:22:16.299944   12512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:22:16.299957   12512 out.go:304] Setting ErrFile to fd 2...
	I0528 20:22:16.299961   12512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:22:16.300139   12512 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
	I0528 20:22:16.300709   12512 out.go:298] Setting JSON to false
	I0528 20:22:16.301564   12512 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":279,"bootTime":1716927457,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0528 20:22:16.301624   12512 start.go:139] virtualization: kvm guest
	I0528 20:22:16.303632   12512 out.go:177] * [addons-307023] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0528 20:22:16.304886   12512 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 20:22:16.306161   12512 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 20:22:16.304858   12512 notify.go:220] Checking for updates...
	I0528 20:22:16.308339   12512 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 20:22:16.309433   12512 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 20:22:16.310627   12512 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0528 20:22:16.311976   12512 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 20:22:16.313300   12512 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 20:22:16.344364   12512 out.go:177] * Using the kvm2 driver based on user configuration
	I0528 20:22:16.345582   12512 start.go:297] selected driver: kvm2
	I0528 20:22:16.345593   12512 start.go:901] validating driver "kvm2" against <nil>
	I0528 20:22:16.345605   12512 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0528 20:22:16.346303   12512 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 20:22:16.346375   12512 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18966-3963/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0528 20:22:16.360089   12512 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0528 20:22:16.360134   12512 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0528 20:22:16.360341   12512 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 20:22:16.360406   12512 cni.go:84] Creating CNI manager for ""
	I0528 20:22:16.360422   12512 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 20:22:16.360433   12512 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0528 20:22:16.360490   12512 start.go:340] cluster config:
	{Name:addons-307023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-307023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 20:22:16.360586   12512 iso.go:125] acquiring lock: {Name:mk0ef48bd19f4019c3ee87391d15b9afc1d5e8d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 20:22:16.362213   12512 out.go:177] * Starting "addons-307023" primary control-plane node in "addons-307023" cluster
	I0528 20:22:16.363256   12512 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 20:22:16.363286   12512 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0528 20:22:16.363292   12512 cache.go:56] Caching tarball of preloaded images
	I0528 20:22:16.363365   12512 preload.go:173] Found /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0528 20:22:16.363376   12512 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0528 20:22:16.363642   12512 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/config.json ...
	I0528 20:22:16.363659   12512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/config.json: {Name:mk9bcf9f72796568cf263ac6c092a3172b864dd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:22:16.363774   12512 start.go:360] acquireMachinesLock for addons-307023: {Name:mk6fb3103b370c7f3d9923fd9de7afe2c77239d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0528 20:22:16.363816   12512 start.go:364] duration metric: took 29.975µs to acquireMachinesLock for "addons-307023"
	I0528 20:22:16.363832   12512 start.go:93] Provisioning new machine with config: &{Name:addons-307023 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.1 ClusterName:addons-307023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0528 20:22:16.363880   12512 start.go:125] createHost starting for "" (driver="kvm2")
	I0528 20:22:16.365325   12512 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0528 20:22:16.365460   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:22:16.365501   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:22:16.379003   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37911
	I0528 20:22:16.379390   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:22:16.379885   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:22:16.379912   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:22:16.380227   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:22:16.380389   12512 main.go:141] libmachine: (addons-307023) Calling .GetMachineName
	I0528 20:22:16.380527   12512 main.go:141] libmachine: (addons-307023) Calling .DriverName
	I0528 20:22:16.380651   12512 start.go:159] libmachine.API.Create for "addons-307023" (driver="kvm2")
	I0528 20:22:16.380692   12512 client.go:168] LocalClient.Create starting
	I0528 20:22:16.380737   12512 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem
	I0528 20:22:16.644996   12512 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem
	I0528 20:22:16.858993   12512 main.go:141] libmachine: Running pre-create checks...
	I0528 20:22:16.859017   12512 main.go:141] libmachine: (addons-307023) Calling .PreCreateCheck
	I0528 20:22:16.859498   12512 main.go:141] libmachine: (addons-307023) Calling .GetConfigRaw
	I0528 20:22:16.859873   12512 main.go:141] libmachine: Creating machine...
	I0528 20:22:16.859886   12512 main.go:141] libmachine: (addons-307023) Calling .Create
	I0528 20:22:16.860034   12512 main.go:141] libmachine: (addons-307023) Creating KVM machine...
	I0528 20:22:16.861252   12512 main.go:141] libmachine: (addons-307023) DBG | found existing default KVM network
	I0528 20:22:16.862021   12512 main.go:141] libmachine: (addons-307023) DBG | I0528 20:22:16.861873   12534 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015330}
	I0528 20:22:16.862042   12512 main.go:141] libmachine: (addons-307023) DBG | created network xml: 
	I0528 20:22:16.862058   12512 main.go:141] libmachine: (addons-307023) DBG | <network>
	I0528 20:22:16.862071   12512 main.go:141] libmachine: (addons-307023) DBG |   <name>mk-addons-307023</name>
	I0528 20:22:16.862080   12512 main.go:141] libmachine: (addons-307023) DBG |   <dns enable='no'/>
	I0528 20:22:16.862087   12512 main.go:141] libmachine: (addons-307023) DBG |   
	I0528 20:22:16.862097   12512 main.go:141] libmachine: (addons-307023) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0528 20:22:16.862109   12512 main.go:141] libmachine: (addons-307023) DBG |     <dhcp>
	I0528 20:22:16.862151   12512 main.go:141] libmachine: (addons-307023) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0528 20:22:16.862174   12512 main.go:141] libmachine: (addons-307023) DBG |     </dhcp>
	I0528 20:22:16.862185   12512 main.go:141] libmachine: (addons-307023) DBG |   </ip>
	I0528 20:22:16.862196   12512 main.go:141] libmachine: (addons-307023) DBG |   
	I0528 20:22:16.862201   12512 main.go:141] libmachine: (addons-307023) DBG | </network>
	I0528 20:22:16.862206   12512 main.go:141] libmachine: (addons-307023) DBG | 
	I0528 20:22:16.867691   12512 main.go:141] libmachine: (addons-307023) DBG | trying to create private KVM network mk-addons-307023 192.168.39.0/24...
	I0528 20:22:16.931381   12512 main.go:141] libmachine: (addons-307023) DBG | private KVM network mk-addons-307023 192.168.39.0/24 created
	I0528 20:22:16.931415   12512 main.go:141] libmachine: (addons-307023) Setting up store path in /home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023 ...
	I0528 20:22:16.931440   12512 main.go:141] libmachine: (addons-307023) DBG | I0528 20:22:16.931343   12534 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 20:22:16.931487   12512 main.go:141] libmachine: (addons-307023) Building disk image from file:///home/jenkins/minikube-integration/18966-3963/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso
	I0528 20:22:16.931518   12512 main.go:141] libmachine: (addons-307023) Downloading /home/jenkins/minikube-integration/18966-3963/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18966-3963/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0528 20:22:17.174781   12512 main.go:141] libmachine: (addons-307023) DBG | I0528 20:22:17.174661   12534 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/id_rsa...
	I0528 20:22:17.296769   12512 main.go:141] libmachine: (addons-307023) DBG | I0528 20:22:17.296644   12534 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/addons-307023.rawdisk...
	I0528 20:22:17.296798   12512 main.go:141] libmachine: (addons-307023) DBG | Writing magic tar header
	I0528 20:22:17.296808   12512 main.go:141] libmachine: (addons-307023) DBG | Writing SSH key tar header
	I0528 20:22:17.296820   12512 main.go:141] libmachine: (addons-307023) DBG | I0528 20:22:17.296747   12534 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023 ...
	I0528 20:22:17.296844   12512 main.go:141] libmachine: (addons-307023) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023
	I0528 20:22:17.296861   12512 main.go:141] libmachine: (addons-307023) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18966-3963/.minikube/machines
	I0528 20:22:17.296871   12512 main.go:141] libmachine: (addons-307023) Setting executable bit set on /home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023 (perms=drwx------)
	I0528 20:22:17.296882   12512 main.go:141] libmachine: (addons-307023) Setting executable bit set on /home/jenkins/minikube-integration/18966-3963/.minikube/machines (perms=drwxr-xr-x)
	I0528 20:22:17.296930   12512 main.go:141] libmachine: (addons-307023) Setting executable bit set on /home/jenkins/minikube-integration/18966-3963/.minikube (perms=drwxr-xr-x)
	I0528 20:22:17.296974   12512 main.go:141] libmachine: (addons-307023) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 20:22:17.296985   12512 main.go:141] libmachine: (addons-307023) Setting executable bit set on /home/jenkins/minikube-integration/18966-3963 (perms=drwxrwxr-x)
	I0528 20:22:17.297003   12512 main.go:141] libmachine: (addons-307023) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0528 20:22:17.297013   12512 main.go:141] libmachine: (addons-307023) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0528 20:22:17.297024   12512 main.go:141] libmachine: (addons-307023) Creating domain...
	I0528 20:22:17.297039   12512 main.go:141] libmachine: (addons-307023) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18966-3963
	I0528 20:22:17.297052   12512 main.go:141] libmachine: (addons-307023) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0528 20:22:17.297063   12512 main.go:141] libmachine: (addons-307023) DBG | Checking permissions on dir: /home/jenkins
	I0528 20:22:17.297074   12512 main.go:141] libmachine: (addons-307023) DBG | Checking permissions on dir: /home
	I0528 20:22:17.297082   12512 main.go:141] libmachine: (addons-307023) DBG | Skipping /home - not owner
	I0528 20:22:17.298037   12512 main.go:141] libmachine: (addons-307023) define libvirt domain using xml: 
	I0528 20:22:17.298062   12512 main.go:141] libmachine: (addons-307023) <domain type='kvm'>
	I0528 20:22:17.298075   12512 main.go:141] libmachine: (addons-307023)   <name>addons-307023</name>
	I0528 20:22:17.298085   12512 main.go:141] libmachine: (addons-307023)   <memory unit='MiB'>4000</memory>
	I0528 20:22:17.298094   12512 main.go:141] libmachine: (addons-307023)   <vcpu>2</vcpu>
	I0528 20:22:17.298101   12512 main.go:141] libmachine: (addons-307023)   <features>
	I0528 20:22:17.298114   12512 main.go:141] libmachine: (addons-307023)     <acpi/>
	I0528 20:22:17.298118   12512 main.go:141] libmachine: (addons-307023)     <apic/>
	I0528 20:22:17.298123   12512 main.go:141] libmachine: (addons-307023)     <pae/>
	I0528 20:22:17.298130   12512 main.go:141] libmachine: (addons-307023)     
	I0528 20:22:17.298135   12512 main.go:141] libmachine: (addons-307023)   </features>
	I0528 20:22:17.298145   12512 main.go:141] libmachine: (addons-307023)   <cpu mode='host-passthrough'>
	I0528 20:22:17.298156   12512 main.go:141] libmachine: (addons-307023)   
	I0528 20:22:17.298171   12512 main.go:141] libmachine: (addons-307023)   </cpu>
	I0528 20:22:17.298184   12512 main.go:141] libmachine: (addons-307023)   <os>
	I0528 20:22:17.298194   12512 main.go:141] libmachine: (addons-307023)     <type>hvm</type>
	I0528 20:22:17.298202   12512 main.go:141] libmachine: (addons-307023)     <boot dev='cdrom'/>
	I0528 20:22:17.298210   12512 main.go:141] libmachine: (addons-307023)     <boot dev='hd'/>
	I0528 20:22:17.298215   12512 main.go:141] libmachine: (addons-307023)     <bootmenu enable='no'/>
	I0528 20:22:17.298222   12512 main.go:141] libmachine: (addons-307023)   </os>
	I0528 20:22:17.298234   12512 main.go:141] libmachine: (addons-307023)   <devices>
	I0528 20:22:17.298245   12512 main.go:141] libmachine: (addons-307023)     <disk type='file' device='cdrom'>
	I0528 20:22:17.298262   12512 main.go:141] libmachine: (addons-307023)       <source file='/home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/boot2docker.iso'/>
	I0528 20:22:17.298274   12512 main.go:141] libmachine: (addons-307023)       <target dev='hdc' bus='scsi'/>
	I0528 20:22:17.298286   12512 main.go:141] libmachine: (addons-307023)       <readonly/>
	I0528 20:22:17.298294   12512 main.go:141] libmachine: (addons-307023)     </disk>
	I0528 20:22:17.298304   12512 main.go:141] libmachine: (addons-307023)     <disk type='file' device='disk'>
	I0528 20:22:17.298321   12512 main.go:141] libmachine: (addons-307023)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0528 20:22:17.298334   12512 main.go:141] libmachine: (addons-307023)       <source file='/home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/addons-307023.rawdisk'/>
	I0528 20:22:17.298342   12512 main.go:141] libmachine: (addons-307023)       <target dev='hda' bus='virtio'/>
	I0528 20:22:17.298350   12512 main.go:141] libmachine: (addons-307023)     </disk>
	I0528 20:22:17.298357   12512 main.go:141] libmachine: (addons-307023)     <interface type='network'>
	I0528 20:22:17.298365   12512 main.go:141] libmachine: (addons-307023)       <source network='mk-addons-307023'/>
	I0528 20:22:17.298372   12512 main.go:141] libmachine: (addons-307023)       <model type='virtio'/>
	I0528 20:22:17.298380   12512 main.go:141] libmachine: (addons-307023)     </interface>
	I0528 20:22:17.298392   12512 main.go:141] libmachine: (addons-307023)     <interface type='network'>
	I0528 20:22:17.298404   12512 main.go:141] libmachine: (addons-307023)       <source network='default'/>
	I0528 20:22:17.298412   12512 main.go:141] libmachine: (addons-307023)       <model type='virtio'/>
	I0528 20:22:17.298423   12512 main.go:141] libmachine: (addons-307023)     </interface>
	I0528 20:22:17.298432   12512 main.go:141] libmachine: (addons-307023)     <serial type='pty'>
	I0528 20:22:17.298444   12512 main.go:141] libmachine: (addons-307023)       <target port='0'/>
	I0528 20:22:17.298454   12512 main.go:141] libmachine: (addons-307023)     </serial>
	I0528 20:22:17.298473   12512 main.go:141] libmachine: (addons-307023)     <console type='pty'>
	I0528 20:22:17.298487   12512 main.go:141] libmachine: (addons-307023)       <target type='serial' port='0'/>
	I0528 20:22:17.298493   12512 main.go:141] libmachine: (addons-307023)     </console>
	I0528 20:22:17.298500   12512 main.go:141] libmachine: (addons-307023)     <rng model='virtio'>
	I0528 20:22:17.298507   12512 main.go:141] libmachine: (addons-307023)       <backend model='random'>/dev/random</backend>
	I0528 20:22:17.298514   12512 main.go:141] libmachine: (addons-307023)     </rng>
	I0528 20:22:17.298521   12512 main.go:141] libmachine: (addons-307023)     
	I0528 20:22:17.298530   12512 main.go:141] libmachine: (addons-307023)     
	I0528 20:22:17.298542   12512 main.go:141] libmachine: (addons-307023)   </devices>
	I0528 20:22:17.298552   12512 main.go:141] libmachine: (addons-307023) </domain>
	I0528 20:22:17.298562   12512 main.go:141] libmachine: (addons-307023) 
	I0528 20:22:17.304457   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:c0:73:53 in network default
	I0528 20:22:17.305064   12512 main.go:141] libmachine: (addons-307023) Ensuring networks are active...
	I0528 20:22:17.305083   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:17.305689   12512 main.go:141] libmachine: (addons-307023) Ensuring network default is active
	I0528 20:22:17.305962   12512 main.go:141] libmachine: (addons-307023) Ensuring network mk-addons-307023 is active
	I0528 20:22:17.306424   12512 main.go:141] libmachine: (addons-307023) Getting domain xml...
	I0528 20:22:17.307003   12512 main.go:141] libmachine: (addons-307023) Creating domain...
	I0528 20:22:18.666855   12512 main.go:141] libmachine: (addons-307023) Waiting to get IP...
	I0528 20:22:18.667741   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:18.668053   12512 main.go:141] libmachine: (addons-307023) DBG | unable to find current IP address of domain addons-307023 in network mk-addons-307023
	I0528 20:22:18.668103   12512 main.go:141] libmachine: (addons-307023) DBG | I0528 20:22:18.668056   12534 retry.go:31] will retry after 254.097744ms: waiting for machine to come up
	I0528 20:22:18.923393   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:18.923770   12512 main.go:141] libmachine: (addons-307023) DBG | unable to find current IP address of domain addons-307023 in network mk-addons-307023
	I0528 20:22:18.923803   12512 main.go:141] libmachine: (addons-307023) DBG | I0528 20:22:18.923739   12534 retry.go:31] will retry after 364.094801ms: waiting for machine to come up
	I0528 20:22:19.289187   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:19.289596   12512 main.go:141] libmachine: (addons-307023) DBG | unable to find current IP address of domain addons-307023 in network mk-addons-307023
	I0528 20:22:19.289619   12512 main.go:141] libmachine: (addons-307023) DBG | I0528 20:22:19.289552   12534 retry.go:31] will retry after 304.027275ms: waiting for machine to come up
	I0528 20:22:19.594988   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:19.595336   12512 main.go:141] libmachine: (addons-307023) DBG | unable to find current IP address of domain addons-307023 in network mk-addons-307023
	I0528 20:22:19.595365   12512 main.go:141] libmachine: (addons-307023) DBG | I0528 20:22:19.595282   12534 retry.go:31] will retry after 501.270308ms: waiting for machine to come up
	I0528 20:22:20.097808   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:20.098266   12512 main.go:141] libmachine: (addons-307023) DBG | unable to find current IP address of domain addons-307023 in network mk-addons-307023
	I0528 20:22:20.098293   12512 main.go:141] libmachine: (addons-307023) DBG | I0528 20:22:20.098216   12534 retry.go:31] will retry after 460.735285ms: waiting for machine to come up
	I0528 20:22:20.560858   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:20.561409   12512 main.go:141] libmachine: (addons-307023) DBG | unable to find current IP address of domain addons-307023 in network mk-addons-307023
	I0528 20:22:20.561434   12512 main.go:141] libmachine: (addons-307023) DBG | I0528 20:22:20.561366   12534 retry.go:31] will retry after 764.144242ms: waiting for machine to come up
	I0528 20:22:21.327164   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:21.327563   12512 main.go:141] libmachine: (addons-307023) DBG | unable to find current IP address of domain addons-307023 in network mk-addons-307023
	I0528 20:22:21.327593   12512 main.go:141] libmachine: (addons-307023) DBG | I0528 20:22:21.327524   12534 retry.go:31] will retry after 891.559058ms: waiting for machine to come up
	I0528 20:22:22.222184   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:22.222606   12512 main.go:141] libmachine: (addons-307023) DBG | unable to find current IP address of domain addons-307023 in network mk-addons-307023
	I0528 20:22:22.222659   12512 main.go:141] libmachine: (addons-307023) DBG | I0528 20:22:22.222564   12534 retry.go:31] will retry after 1.150241524s: waiting for machine to come up
	I0528 20:22:23.374802   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:23.375080   12512 main.go:141] libmachine: (addons-307023) DBG | unable to find current IP address of domain addons-307023 in network mk-addons-307023
	I0528 20:22:23.375100   12512 main.go:141] libmachine: (addons-307023) DBG | I0528 20:22:23.375041   12534 retry.go:31] will retry after 1.424523439s: waiting for machine to come up
	I0528 20:22:24.801720   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:24.802188   12512 main.go:141] libmachine: (addons-307023) DBG | unable to find current IP address of domain addons-307023 in network mk-addons-307023
	I0528 20:22:24.802211   12512 main.go:141] libmachine: (addons-307023) DBG | I0528 20:22:24.802153   12534 retry.go:31] will retry after 1.834091116s: waiting for machine to come up
	I0528 20:22:26.638045   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:26.638517   12512 main.go:141] libmachine: (addons-307023) DBG | unable to find current IP address of domain addons-307023 in network mk-addons-307023
	I0528 20:22:26.638546   12512 main.go:141] libmachine: (addons-307023) DBG | I0528 20:22:26.638475   12534 retry.go:31] will retry after 2.55493296s: waiting for machine to come up
	I0528 20:22:29.196052   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:29.196485   12512 main.go:141] libmachine: (addons-307023) DBG | unable to find current IP address of domain addons-307023 in network mk-addons-307023
	I0528 20:22:29.196505   12512 main.go:141] libmachine: (addons-307023) DBG | I0528 20:22:29.196438   12534 retry.go:31] will retry after 3.539361988s: waiting for machine to come up
	I0528 20:22:32.737402   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:32.737722   12512 main.go:141] libmachine: (addons-307023) DBG | unable to find current IP address of domain addons-307023 in network mk-addons-307023
	I0528 20:22:32.737742   12512 main.go:141] libmachine: (addons-307023) DBG | I0528 20:22:32.737688   12534 retry.go:31] will retry after 4.468051148s: waiting for machine to come up
	I0528 20:22:37.206865   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:37.207376   12512 main.go:141] libmachine: (addons-307023) Found IP for machine: 192.168.39.230
	I0528 20:22:37.207401   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has current primary IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:37.207409   12512 main.go:141] libmachine: (addons-307023) Reserving static IP address...
	I0528 20:22:37.207773   12512 main.go:141] libmachine: (addons-307023) DBG | unable to find host DHCP lease matching {name: "addons-307023", mac: "52:54:00:40:c7:f9", ip: "192.168.39.230"} in network mk-addons-307023
	I0528 20:22:37.275489   12512 main.go:141] libmachine: (addons-307023) DBG | Getting to WaitForSSH function...
	I0528 20:22:37.275522   12512 main.go:141] libmachine: (addons-307023) Reserved static IP address: 192.168.39.230
	I0528 20:22:37.275538   12512 main.go:141] libmachine: (addons-307023) Waiting for SSH to be available...
	I0528 20:22:37.278120   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:37.278539   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:minikube Clientid:01:52:54:00:40:c7:f9}
	I0528 20:22:37.278567   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:37.278777   12512 main.go:141] libmachine: (addons-307023) DBG | Using SSH client type: external
	I0528 20:22:37.278806   12512 main.go:141] libmachine: (addons-307023) DBG | Using SSH private key: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/id_rsa (-rw-------)
	I0528 20:22:37.278835   12512 main.go:141] libmachine: (addons-307023) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.230 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0528 20:22:37.278848   12512 main.go:141] libmachine: (addons-307023) DBG | About to run SSH command:
	I0528 20:22:37.278864   12512 main.go:141] libmachine: (addons-307023) DBG | exit 0
	I0528 20:22:37.410138   12512 main.go:141] libmachine: (addons-307023) DBG | SSH cmd err, output: <nil>: 
	I0528 20:22:37.410422   12512 main.go:141] libmachine: (addons-307023) KVM machine creation complete!
	I0528 20:22:37.410742   12512 main.go:141] libmachine: (addons-307023) Calling .GetConfigRaw
	I0528 20:22:37.418632   12512 main.go:141] libmachine: (addons-307023) Calling .DriverName
	I0528 20:22:37.418838   12512 main.go:141] libmachine: (addons-307023) Calling .DriverName
	I0528 20:22:37.419005   12512 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0528 20:22:37.419017   12512 main.go:141] libmachine: (addons-307023) Calling .GetState
	I0528 20:22:37.420185   12512 main.go:141] libmachine: Detecting operating system of created instance...
	I0528 20:22:37.420201   12512 main.go:141] libmachine: Waiting for SSH to be available...
	I0528 20:22:37.420209   12512 main.go:141] libmachine: Getting to WaitForSSH function...
	I0528 20:22:37.420217   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:22:37.422444   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:37.422765   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:22:37.422793   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:37.422896   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:22:37.423051   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:22:37.423182   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:22:37.423333   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:22:37.423478   12512 main.go:141] libmachine: Using SSH client type: native
	I0528 20:22:37.423658   12512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0528 20:22:37.423673   12512 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0528 20:22:37.529140   12512 main.go:141] libmachine: SSH cmd err, output: <nil>: 
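The two probes above (first through the external /usr/bin/ssh client during machine creation, then through libmachine's native client) both boil down to running a no-op `exit 0` over SSH until the guest answers. Below is a minimal Go sketch of that readiness check, shelling out to ssh as the external-client path does; the helper name waitForSSH, the retry budget, and the error handling are illustrative assumptions, not minikube's actual implementation.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForSSH keeps running a no-op `exit 0` over ssh until the guest
    // accepts the connection or the retry budget is exhausted.
    func waitForSSH(host, keyPath string) error {
        for i := 0; i < 30; i++ {
            cmd := exec.Command("ssh",
                "-o", "StrictHostKeyChecking=no",
                "-o", "UserKnownHostsFile=/dev/null",
                "-o", "ConnectTimeout=10",
                "-i", keyPath,
                "docker@"+host, "exit 0")
            if err := cmd.Run(); err == nil {
                return nil // SSH is reachable
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("timed out waiting for SSH on %s", host)
    }

    func main() {
        key := "/home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/id_rsa"
        if err := waitForSSH("192.168.39.230", key); err != nil {
            fmt.Println(err)
        }
    }
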
	I0528 20:22:37.529163   12512 main.go:141] libmachine: Detecting the provisioner...
	I0528 20:22:37.529172   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:22:37.531832   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:37.532181   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:22:37.532207   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:37.532352   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:22:37.532563   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:22:37.532748   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:22:37.532926   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:22:37.533115   12512 main.go:141] libmachine: Using SSH client type: native
	I0528 20:22:37.533302   12512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0528 20:22:37.533314   12512 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0528 20:22:37.642437   12512 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0528 20:22:37.642548   12512 main.go:141] libmachine: found compatible host: buildroot
	I0528 20:22:37.642568   12512 main.go:141] libmachine: Provisioning with buildroot...
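Provisioner detection above works by running `cat /etc/os-release` over SSH and matching the reported distribution (here ID=buildroot) against the provisioners libmachine knows how to configure. The following is a minimal Go sketch of parsing that file, assuming only the simple KEY=value format shown in the output above; the function name parseOSRelease is illustrative.

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // parseOSRelease reads an os-release style file (KEY=value, values
    // optionally quoted) into a map; keys match the output logged above.
    func parseOSRelease(path string) (map[string]string, error) {
        f, err := os.Open(path)
        if err != nil {
            return nil, err
        }
        defer f.Close()

        info := map[string]string{}
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" || strings.HasPrefix(line, "#") {
                continue
            }
            k, v, ok := strings.Cut(line, "=")
            if !ok {
                continue
            }
            info[k] = strings.Trim(v, `"`)
        }
        return info, sc.Err()
    }

    func main() {
        info, err := parseOSRelease("/etc/os-release")
        if err != nil {
            fmt.Println(err)
            return
        }
        // The log above reports ID=buildroot, which libmachine treats as a
        // compatible host for provisioning.
        fmt.Println("detected provisioner:", info["ID"])
    }
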
	I0528 20:22:37.642581   12512 main.go:141] libmachine: (addons-307023) Calling .GetMachineName
	I0528 20:22:37.642813   12512 buildroot.go:166] provisioning hostname "addons-307023"
	I0528 20:22:37.642834   12512 main.go:141] libmachine: (addons-307023) Calling .GetMachineName
	I0528 20:22:37.643040   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:22:37.645636   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:37.646019   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:22:37.646159   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:37.646394   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:22:37.646608   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:22:37.646785   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:22:37.646898   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:22:37.647091   12512 main.go:141] libmachine: Using SSH client type: native
	I0528 20:22:37.647376   12512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0528 20:22:37.647398   12512 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-307023 && echo "addons-307023" | sudo tee /etc/hostname
	I0528 20:22:37.768968   12512 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-307023
	
	I0528 20:22:37.768993   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:22:37.772094   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:37.772460   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:22:37.772490   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:37.772679   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:22:37.772874   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:22:37.773062   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:22:37.773228   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:22:37.773415   12512 main.go:141] libmachine: Using SSH client type: native
	I0528 20:22:37.773621   12512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0528 20:22:37.773642   12512 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-307023' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-307023/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-307023' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 20:22:37.887457   12512 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 20:22:37.887485   12512 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18966-3963/.minikube CaCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18966-3963/.minikube}
	I0528 20:22:37.887526   12512 buildroot.go:174] setting up certificates
	I0528 20:22:37.887535   12512 provision.go:84] configureAuth start
	I0528 20:22:37.887545   12512 main.go:141] libmachine: (addons-307023) Calling .GetMachineName
	I0528 20:22:37.887808   12512 main.go:141] libmachine: (addons-307023) Calling .GetIP
	I0528 20:22:37.890652   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:37.890973   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:22:37.891011   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:37.891169   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:22:37.893216   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:37.893587   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:22:37.893610   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:37.893663   12512 provision.go:143] copyHostCerts
	I0528 20:22:37.893737   12512 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem (1078 bytes)
	I0528 20:22:37.893877   12512 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem (1123 bytes)
	I0528 20:22:37.893942   12512 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem (1675 bytes)
	I0528 20:22:37.894022   12512 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem org=jenkins.addons-307023 san=[127.0.0.1 192.168.39.230 addons-307023 localhost minikube]
	I0528 20:22:38.032283   12512 provision.go:177] copyRemoteCerts
	I0528 20:22:38.032333   12512 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 20:22:38.032355   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:22:38.035593   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:38.035915   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:22:38.035978   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:38.036202   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:22:38.036374   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:22:38.036526   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:22:38.036686   12512 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/id_rsa Username:docker}
	I0528 20:22:38.120361   12512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0528 20:22:38.145488   12512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0528 20:22:38.170707   12512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0528 20:22:38.194342   12512 provision.go:87] duration metric: took 306.796879ms to configureAuth
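configureAuth above generates a server certificate for the machine, signed by the local CA and carrying the SANs listed in provision.go (127.0.0.1, 192.168.39.230, addons-307023, localhost, minikube), then copies it to /etc/docker on the guest. The sketch below shows issuing such a SAN-bearing certificate with Go's crypto/x509; it creates a throwaway CA in memory instead of loading ca.pem/ca-key.pem, and error handling is elided for brevity, so it illustrates the technique rather than minikube's code.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA (errors ignored for brevity in this sketch).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate with the SANs reported by provision.go above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.addons-307023"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            DNSNames:     []string{"addons-307023", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.230")},
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
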
	I0528 20:22:38.194369   12512 buildroot.go:189] setting minikube options for container-runtime
	I0528 20:22:38.194565   12512 config.go:182] Loaded profile config "addons-307023": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 20:22:38.194648   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:22:38.197722   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:38.198093   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:22:38.198124   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:38.198418   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:22:38.198626   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:22:38.198807   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:22:38.198908   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:22:38.199075   12512 main.go:141] libmachine: Using SSH client type: native
	I0528 20:22:38.199246   12512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0528 20:22:38.199260   12512 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0528 20:22:38.461596   12512 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0528 20:22:38.461619   12512 main.go:141] libmachine: Checking connection to Docker...
	I0528 20:22:38.461627   12512 main.go:141] libmachine: (addons-307023) Calling .GetURL
	I0528 20:22:38.462953   12512 main.go:141] libmachine: (addons-307023) DBG | Using libvirt version 6000000
	I0528 20:22:38.465537   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:38.465908   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:22:38.465936   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:38.466238   12512 main.go:141] libmachine: Docker is up and running!
	I0528 20:22:38.466263   12512 main.go:141] libmachine: Reticulating splines...
	I0528 20:22:38.466270   12512 client.go:171] duration metric: took 22.085566975s to LocalClient.Create
	I0528 20:22:38.466286   12512 start.go:167] duration metric: took 22.085643295s to libmachine.API.Create "addons-307023"
	I0528 20:22:38.466293   12512 start.go:293] postStartSetup for "addons-307023" (driver="kvm2")
	I0528 20:22:38.466302   12512 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0528 20:22:38.466318   12512 main.go:141] libmachine: (addons-307023) Calling .DriverName
	I0528 20:22:38.466521   12512 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0528 20:22:38.466549   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:22:38.468880   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:38.469195   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:22:38.469231   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:38.469363   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:22:38.469548   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:22:38.469687   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:22:38.469840   12512 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/id_rsa Username:docker}
	I0528 20:22:38.551927   12512 ssh_runner.go:195] Run: cat /etc/os-release
	I0528 20:22:38.556169   12512 info.go:137] Remote host: Buildroot 2023.02.9
	I0528 20:22:38.556192   12512 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/addons for local assets ...
	I0528 20:22:38.556265   12512 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/files for local assets ...
	I0528 20:22:38.556293   12512 start.go:296] duration metric: took 89.994852ms for postStartSetup
	I0528 20:22:38.556326   12512 main.go:141] libmachine: (addons-307023) Calling .GetConfigRaw
	I0528 20:22:38.556878   12512 main.go:141] libmachine: (addons-307023) Calling .GetIP
	I0528 20:22:38.559569   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:38.559936   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:22:38.559965   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:38.560189   12512 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/config.json ...
	I0528 20:22:38.560367   12512 start.go:128] duration metric: took 22.196477548s to createHost
	I0528 20:22:38.560387   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:22:38.562784   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:38.563155   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:22:38.563180   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:38.563338   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:22:38.563527   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:22:38.563700   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:22:38.563868   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:22:38.564036   12512 main.go:141] libmachine: Using SSH client type: native
	I0528 20:22:38.564209   12512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0528 20:22:38.564226   12512 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0528 20:22:38.670636   12512 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716927758.641434286
	
	I0528 20:22:38.670661   12512 fix.go:216] guest clock: 1716927758.641434286
	I0528 20:22:38.670671   12512 fix.go:229] Guest: 2024-05-28 20:22:38.641434286 +0000 UTC Remote: 2024-05-28 20:22:38.56037762 +0000 UTC m=+22.294392597 (delta=81.056666ms)
	I0528 20:22:38.670696   12512 fix.go:200] guest clock delta is within tolerance: 81.056666ms
	I0528 20:22:38.670703   12512 start.go:83] releasing machines lock for "addons-307023", held for 22.306877351s
	I0528 20:22:38.670730   12512 main.go:141] libmachine: (addons-307023) Calling .DriverName
	I0528 20:22:38.670994   12512 main.go:141] libmachine: (addons-307023) Calling .GetIP
	I0528 20:22:38.673545   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:38.673995   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:22:38.674022   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:38.674159   12512 main.go:141] libmachine: (addons-307023) Calling .DriverName
	I0528 20:22:38.674595   12512 main.go:141] libmachine: (addons-307023) Calling .DriverName
	I0528 20:22:38.674753   12512 main.go:141] libmachine: (addons-307023) Calling .DriverName
	I0528 20:22:38.674845   12512 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0528 20:22:38.674884   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:22:38.674928   12512 ssh_runner.go:195] Run: cat /version.json
	I0528 20:22:38.674955   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:22:38.677889   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:38.678112   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:38.678443   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:22:38.678472   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:38.678535   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:22:38.678573   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:38.678651   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:22:38.678832   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:22:38.678894   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:22:38.678986   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:22:38.679060   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:22:38.679323   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:22:38.679352   12512 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/id_rsa Username:docker}
	I0528 20:22:38.679499   12512 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/id_rsa Username:docker}
	I0528 20:22:38.764221   12512 ssh_runner.go:195] Run: systemctl --version
	I0528 20:22:38.792258   12512 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0528 20:22:38.961195   12512 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0528 20:22:38.967494   12512 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0528 20:22:38.967551   12512 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 20:22:38.987030   12512 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
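Default bridge/podman CNI configurations are pushed aside so they do not conflict with the CNI minikube installs: the find/mv pipeline above renames them with a .mk_disabled suffix, and the log confirms 87-podman-bridge.conflist was disabled. An equivalent sketch in Go follows (same directory and suffix as above; the matching rule is simplified to a substring check).

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // Disable bridge/podman CNI configs the same way the find/mv pipeline
    // above does: rename them with a .mk_disabled suffix so CRI-O ignores them.
    func main() {
        entries, err := os.ReadDir("/etc/cni/net.d")
        if err != nil {
            fmt.Println(err)
            return
        }
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
                continue
            }
            src := filepath.Join("/etc/cni/net.d", name)
            if err := os.Rename(src, src+".mk_disabled"); err != nil {
                fmt.Println(err)
            } else {
                fmt.Println("disabled", src)
            }
        }
    }
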
	I0528 20:22:38.987051   12512 start.go:494] detecting cgroup driver to use...
	I0528 20:22:38.987113   12512 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0528 20:22:39.007816   12512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 20:22:39.024178   12512 docker.go:217] disabling cri-docker service (if available) ...
	I0528 20:22:39.024240   12512 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0528 20:22:39.040535   12512 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0528 20:22:39.056974   12512 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0528 20:22:39.187646   12512 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0528 20:22:39.329537   12512 docker.go:233] disabling docker service ...
	I0528 20:22:39.329623   12512 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0528 20:22:39.344137   12512 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0528 20:22:39.357150   12512 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0528 20:22:39.502803   12512 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0528 20:22:39.631687   12512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0528 20:22:39.645878   12512 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 20:22:39.664157   12512 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0528 20:22:39.664235   12512 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:22:39.675029   12512 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0528 20:22:39.675086   12512 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:22:39.685890   12512 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:22:39.696429   12512 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:22:39.706996   12512 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0528 20:22:39.717594   12512 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:22:39.728286   12512 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:22:39.745432   12512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
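The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pinning the pause image to registry.k8s.io/pause:3.9, forcing cgroup_manager = "cgroupfs", setting conmon_cgroup = "pod", and seeding default_sysctls with net.ipv4.ip_unprivileged_port_start=0. Below is a small Go sketch of the first two edits using regexp, with patterns mirroring the logged sed expressions; it is a simplified stand-in, not minikube's implementation.

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // Force the pause image and cgroup manager in CRI-O's drop-in config,
    // mirroring the sed edits logged above.
    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            fmt.Println(err)
            return
        }
        cfg := string(data)
        cfg = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(cfg, `pause_image = "registry.k8s.io/pause:3.9"`)
        cfg = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(cfg, `cgroup_manager = "cgroupfs"`)
        if err := os.WriteFile(path, []byte(cfg), 0o644); err != nil {
            fmt.Println(err)
        }
    }
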
	I0528 20:22:39.756106   12512 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0528 20:22:39.765607   12512 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0528 20:22:39.765666   12512 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0528 20:22:39.779281   12512 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0528 20:22:39.789067   12512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 20:22:39.912673   12512 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0528 20:22:40.050327   12512 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0528 20:22:40.050408   12512 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0528 20:22:40.055035   12512 start.go:562] Will wait 60s for crictl version
	I0528 20:22:40.055097   12512 ssh_runner.go:195] Run: which crictl
	I0528 20:22:40.058958   12512 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0528 20:22:40.097593   12512 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0528 20:22:40.097729   12512 ssh_runner.go:195] Run: crio --version
	I0528 20:22:40.125486   12512 ssh_runner.go:195] Run: crio --version
	I0528 20:22:40.158285   12512 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0528 20:22:40.159618   12512 main.go:141] libmachine: (addons-307023) Calling .GetIP
	I0528 20:22:40.162473   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:40.162822   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:22:40.162851   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:40.163047   12512 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0528 20:22:40.167411   12512 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
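Both here and later for control-plane.minikube.internal, the /etc/hosts update follows the same idempotent pattern: filter out any existing line for the name with `grep -v`, append the fresh "IP<tab>name" entry, and copy the result back over /etc/hosts. A Go equivalent of that pattern follows; the helper name ensureHostsEntry is illustrative.

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry mirrors the shell pipeline above: strip any existing
    // line for the name, append "IP<TAB>name", and write the file back.
    func ensureHostsEntry(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
        kept := lines[:0]
        for _, line := range lines {
            if strings.HasSuffix(line, "\t"+name) {
                continue // drop the stale entry, as `grep -v` does above
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
            fmt.Println(err)
        }
    }
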
	I0528 20:22:40.180365   12512 kubeadm.go:877] updating cluster {Name:addons-307023 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
1 ClusterName:addons-307023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0528 20:22:40.180486   12512 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 20:22:40.180529   12512 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 20:22:40.212715   12512 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0528 20:22:40.212793   12512 ssh_runner.go:195] Run: which lz4
	I0528 20:22:40.217038   12512 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0528 20:22:40.221447   12512 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0528 20:22:40.221475   12512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0528 20:22:41.541443   12512 crio.go:462] duration metric: took 1.324445637s to copy over tarball
	I0528 20:22:41.541511   12512 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0528 20:22:43.770641   12512 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.229100733s)
	I0528 20:22:43.770675   12512 crio.go:469] duration metric: took 2.229208312s to extract the tarball
	I0528 20:22:43.770682   12512 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0528 20:22:43.808573   12512 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 20:22:43.851141   12512 crio.go:514] all images are preloaded for cri-o runtime.
	I0528 20:22:43.851164   12512 cache_images.go:84] Images are preloaded, skipping loading
	I0528 20:22:43.851171   12512 kubeadm.go:928] updating node { 192.168.39.230 8443 v1.30.1 crio true true} ...
	I0528 20:22:43.851267   12512 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-307023 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.230
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:addons-307023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
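The kubelet drop-in printed above is generated from the cluster config (KubernetesVersion, node name, node IP) and later written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A minimal Go sketch that renders an equivalent unit with text/template follows; the template literal is a reconstruction of the output above, not minikube's embedded asset.

    package main

    import (
        "os"
        "text/template"
    )

    // Values come from the kubelet unit printed above; the template itself is
    // an illustrative reconstruction.
    const unitTmpl = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(unitTmpl))
        t.Execute(os.Stdout, map[string]string{
            "KubernetesVersion": "v1.30.1",
            "NodeName":          "addons-307023",
            "NodeIP":            "192.168.39.230",
        })
    }
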
	I0528 20:22:43.851325   12512 ssh_runner.go:195] Run: crio config
	I0528 20:22:43.898904   12512 cni.go:84] Creating CNI manager for ""
	I0528 20:22:43.898928   12512 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 20:22:43.898940   12512 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0528 20:22:43.898968   12512 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.230 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-307023 NodeName:addons-307023 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.230"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.230 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0528 20:22:43.899105   12512 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.230
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-307023"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.230
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.230"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0528 20:22:43.899162   12512 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0528 20:22:43.909685   12512 binaries.go:44] Found k8s binaries, skipping transfer
	I0528 20:22:43.909752   12512 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0528 20:22:43.919534   12512 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0528 20:22:43.936893   12512 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0528 20:22:43.953778   12512 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0528 20:22:43.970704   12512 ssh_runner.go:195] Run: grep 192.168.39.230	control-plane.minikube.internal$ /etc/hosts
	I0528 20:22:43.974802   12512 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.230	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 20:22:43.987166   12512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 20:22:44.111756   12512 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 20:22:44.129559   12512 certs.go:68] Setting up /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023 for IP: 192.168.39.230
	I0528 20:22:44.129578   12512 certs.go:194] generating shared ca certs ...
	I0528 20:22:44.129593   12512 certs.go:226] acquiring lock for ca certs: {Name:mkf081a2e878fd451cfbdf01dc28a5f7a6a3abc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:22:44.129728   12512 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key
	I0528 20:22:44.304591   12512 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt ...
	I0528 20:22:44.304617   12512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt: {Name:mkf12219490495734c93ec1a852db4cdd558f74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:22:44.304799   12512 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key ...
	I0528 20:22:44.304817   12512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key: {Name:mk6f16953334bbe6cb1ef60b5d82f2adc64cf131 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:22:44.304916   12512 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key
	I0528 20:22:44.563093   12512 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.crt ...
	I0528 20:22:44.563125   12512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.crt: {Name:mk26fe5087377e64623e3b97df2d91a014dc6cff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:22:44.563294   12512 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key ...
	I0528 20:22:44.563310   12512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key: {Name:mk9016fed3ac742477d4dd344b94def9b07486f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:22:44.563405   12512 certs.go:256] generating profile certs ...
	I0528 20:22:44.563469   12512 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.key
	I0528 20:22:44.563488   12512 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.crt with IP's: []
	I0528 20:22:44.789228   12512 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.crt ...
	I0528 20:22:44.789262   12512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.crt: {Name:mk8081754a912d12b3b37a8bb3f19ba0a05b95d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:22:44.789436   12512 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.key ...
	I0528 20:22:44.789447   12512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.key: {Name:mk904f655e4f408646229d0357f533e8ac438914 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:22:44.789515   12512 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/apiserver.key.25a98af5
	I0528 20:22:44.789533   12512 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/apiserver.crt.25a98af5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.230]
	I0528 20:22:44.881091   12512 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/apiserver.crt.25a98af5 ...
	I0528 20:22:44.881123   12512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/apiserver.crt.25a98af5: {Name:mk77497c8eb56a50e975cffeb9c1ba646e4de9b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:22:44.881283   12512 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/apiserver.key.25a98af5 ...
	I0528 20:22:44.881297   12512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/apiserver.key.25a98af5: {Name:mk1a691ced54247f535d479d0911900c03983ca6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:22:44.881362   12512 certs.go:381] copying /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/apiserver.crt.25a98af5 -> /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/apiserver.crt
	I0528 20:22:44.881427   12512 certs.go:385] copying /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/apiserver.key.25a98af5 -> /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/apiserver.key
	I0528 20:22:44.881475   12512 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/proxy-client.key
	I0528 20:22:44.881491   12512 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/proxy-client.crt with IP's: []
	I0528 20:22:44.971782   12512 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/proxy-client.crt ...
	I0528 20:22:44.971818   12512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/proxy-client.crt: {Name:mk522e279cdecac94035a78ba55093e7ea0233ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:22:44.971983   12512 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/proxy-client.key ...
	I0528 20:22:44.971994   12512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/proxy-client.key: {Name:mkc2a01f45a46df7e3eb50b70f86bb7a229ad840 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:22:44.972146   12512 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem (1675 bytes)
	I0528 20:22:44.972178   12512 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem (1078 bytes)
	I0528 20:22:44.972199   12512 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem (1123 bytes)
	I0528 20:22:44.972222   12512 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem (1675 bytes)
	I0528 20:22:44.972727   12512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0528 20:22:45.029934   12512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0528 20:22:45.056736   12512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0528 20:22:45.080415   12512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0528 20:22:45.104571   12512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0528 20:22:45.128462   12512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0528 20:22:45.152463   12512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0528 20:22:45.176387   12512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0528 20:22:45.200966   12512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0528 20:22:45.229791   12512 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0528 20:22:45.248613   12512 ssh_runner.go:195] Run: openssl version
	I0528 20:22:45.254908   12512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0528 20:22:45.265897   12512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0528 20:22:45.270624   12512 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 28 20:22 /usr/share/ca-certificates/minikubeCA.pem
	I0528 20:22:45.270681   12512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0528 20:22:45.276613   12512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
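The CA is made visible to OpenSSL-based clients in two steps shown above: `openssl x509 -hash -noout` computes the subject hash, and the certificate is symlinked as /etc/ssl/certs/<hash>.0 (here b5213941.0). The Go sketch below performs the same two steps, reusing the logged paths; it needs root and is an illustration rather than minikube's code.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // Ask openssl for the subject hash of the CA certificate, then expose
        // it under /etc/ssl/certs/<hash>.0 so OpenSSL-based clients find it.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout",
            "-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
        if err != nil {
            fmt.Println(err)
            return
        }
        hash := strings.TrimSpace(string(out))
        link := "/etc/ssl/certs/" + hash + ".0"
        os.Remove(link) // ignore error; mirrors the forced `ln -fs` above
        if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
            fmt.Println(err)
        }
    }
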
	I0528 20:22:45.287371   12512 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0528 20:22:45.291737   12512 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0528 20:22:45.291788   12512 kubeadm.go:391] StartCluster: {Name:addons-307023 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 C
lusterName:addons-307023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 20:22:45.291878   12512 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0528 20:22:45.291947   12512 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0528 20:22:45.336082   12512 cri.go:89] found id: ""
	I0528 20:22:45.336151   12512 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0528 20:22:45.346154   12512 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0528 20:22:45.356088   12512 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0528 20:22:45.365805   12512 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0528 20:22:45.365835   12512 kubeadm.go:156] found existing configuration files:
	
	I0528 20:22:45.365884   12512 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0528 20:22:45.375013   12512 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0528 20:22:45.375063   12512 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0528 20:22:45.384434   12512 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0528 20:22:45.393549   12512 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0528 20:22:45.393612   12512 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0528 20:22:45.403609   12512 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0528 20:22:45.413189   12512 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0528 20:22:45.413252   12512 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0528 20:22:45.423165   12512 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0528 20:22:45.432522   12512 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0528 20:22:45.432577   12512 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
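The ls/grep/rm sequence above is minikube's stale-config check: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the endpoint is missing (here the files simply do not exist yet, so kubeadm will generate fresh ones). Condensed into an illustrative shell sketch of the same logic, not minikube's actual code:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"   # drop configs that point at a different endpoint (or are absent)
    done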
	I0528 20:22:45.442848   12512 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0528 20:22:45.499395   12512 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0528 20:22:45.499476   12512 kubeadm.go:309] [preflight] Running pre-flight checks
	I0528 20:22:45.628359   12512 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0528 20:22:45.628475   12512 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0528 20:22:45.628585   12512 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0528 20:22:45.837919   12512 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0528 20:22:46.095484   12512 out.go:204]   - Generating certificates and keys ...
	I0528 20:22:46.095628   12512 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0528 20:22:46.095757   12512 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0528 20:22:46.095923   12512 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0528 20:22:46.196705   12512 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0528 20:22:46.664394   12512 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0528 20:22:46.826723   12512 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0528 20:22:46.980540   12512 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0528 20:22:46.980742   12512 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-307023 localhost] and IPs [192.168.39.230 127.0.0.1 ::1]
	I0528 20:22:47.128366   12512 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0528 20:22:47.128553   12512 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-307023 localhost] and IPs [192.168.39.230 127.0.0.1 ::1]
	I0528 20:22:47.457912   12512 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0528 20:22:47.508317   12512 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0528 20:22:47.761559   12512 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0528 20:22:47.761624   12512 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0528 20:22:47.872355   12512 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0528 20:22:48.075763   12512 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0528 20:22:48.234986   12512 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0528 20:22:48.422832   12512 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0528 20:22:48.588090   12512 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0528 20:22:48.588697   12512 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0528 20:22:48.592972   12512 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0528 20:22:48.619220   12512 out.go:204]   - Booting up control plane ...
	I0528 20:22:48.619338   12512 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0528 20:22:48.619427   12512 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0528 20:22:48.619518   12512 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0528 20:22:48.619670   12512 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0528 20:22:48.619792   12512 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0528 20:22:48.619874   12512 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0528 20:22:48.758621   12512 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0528 20:22:48.758699   12512 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0528 20:22:49.259273   12512 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.094252ms
	I0528 20:22:49.259387   12512 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0528 20:22:54.758266   12512 kubeadm.go:309] [api-check] The API server is healthy after 5.501922128s
	I0528 20:22:54.774818   12512 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0528 20:22:54.793326   12512 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0528 20:22:54.822075   12512 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0528 20:22:54.822317   12512 kubeadm.go:309] [mark-control-plane] Marking the node addons-307023 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0528 20:22:54.833078   12512 kubeadm.go:309] [bootstrap-token] Using token: dnpxo0.wrjqml256vgz5hhv
	I0528 20:22:54.834464   12512 out.go:204]   - Configuring RBAC rules ...
	I0528 20:22:54.834613   12512 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0528 20:22:54.839099   12512 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0528 20:22:54.849731   12512 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0528 20:22:54.853625   12512 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0528 20:22:54.857161   12512 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0528 20:22:54.860972   12512 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0528 20:22:55.165222   12512 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0528 20:22:55.607599   12512 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0528 20:22:56.164530   12512 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0528 20:22:56.164556   12512 kubeadm.go:309] 
	I0528 20:22:56.164620   12512 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0528 20:22:56.164632   12512 kubeadm.go:309] 
	I0528 20:22:56.164721   12512 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0528 20:22:56.164732   12512 kubeadm.go:309] 
	I0528 20:22:56.164777   12512 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0528 20:22:56.164883   12512 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0528 20:22:56.164964   12512 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0528 20:22:56.164974   12512 kubeadm.go:309] 
	I0528 20:22:56.165036   12512 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0528 20:22:56.165048   12512 kubeadm.go:309] 
	I0528 20:22:56.165102   12512 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0528 20:22:56.165111   12512 kubeadm.go:309] 
	I0528 20:22:56.165180   12512 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0528 20:22:56.165296   12512 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0528 20:22:56.165389   12512 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0528 20:22:56.165405   12512 kubeadm.go:309] 
	I0528 20:22:56.165478   12512 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0528 20:22:56.165544   12512 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0528 20:22:56.165551   12512 kubeadm.go:309] 
	I0528 20:22:56.165618   12512 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token dnpxo0.wrjqml256vgz5hhv \
	I0528 20:22:56.165714   12512 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1de7c92f7c6fda1d3adee02fe3e58e442e8ad50d9d224864ca8764aa1a3ae8bb \
	I0528 20:22:56.165733   12512 kubeadm.go:309] 	--control-plane 
	I0528 20:22:56.165739   12512 kubeadm.go:309] 
	I0528 20:22:56.165868   12512 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0528 20:22:56.165885   12512 kubeadm.go:309] 
	I0528 20:22:56.166007   12512 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token dnpxo0.wrjqml256vgz5hhv \
	I0528 20:22:56.166147   12512 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1de7c92f7c6fda1d3adee02fe3e58e442e8ad50d9d224864ca8764aa1a3ae8bb 
	I0528 20:22:56.166401   12512 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0528 20:22:56.166427   12512 cni.go:84] Creating CNI manager for ""
	I0528 20:22:56.166435   12512 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 20:22:56.168305   12512 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0528 20:22:56.169640   12512 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0528 20:22:56.180413   12512 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
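For context, the 496-byte file written to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration announced at "Configuring bridge CNI". Its exact contents are not reproduced in the log; a representative bridge conflist (illustrative values only, assumed rather than taken from this run) has this shape:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        }
      ]
    }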
	I0528 20:22:56.199324   12512 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0528 20:22:56.199403   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:22:56.199463   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-307023 minikube.k8s.io/updated_at=2024_05_28T20_22_56_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82 minikube.k8s.io/name=addons-307023 minikube.k8s.io/primary=true
	I0528 20:22:56.222548   12512 ops.go:34] apiserver oom_adj: -16
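The clusterrolebinding created a few lines up grants the kube-system:default ServiceAccount cluster-admin, which is what later lets the addon manifests create cluster-scoped resources. Expressed as a manifest, the command is roughly equivalent to the following (a sketch of what `kubectl create clusterrolebinding` generates):

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: minikube-rbac
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - kind: ServiceAccount
      name: default
      namespace: kube-system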
	I0528 20:22:56.336687   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:22:56.837360   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:22:57.337005   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:22:57.837162   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:22:58.336867   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:22:58.837058   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:22:59.337480   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:22:59.837523   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:23:00.336774   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:23:00.836910   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:23:01.337664   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:23:01.836927   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:23:02.337625   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:23:02.837710   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:23:03.337117   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:23:03.837064   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:23:04.337743   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:23:04.836799   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:23:05.337104   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:23:05.836943   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:23:06.337349   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:23:06.837365   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:23:07.337549   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:23:07.836707   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:23:08.337681   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:23:08.837464   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:23:09.336953   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:23:09.836789   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:23:10.336885   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:23:10.427885   12512 kubeadm.go:1107] duration metric: took 14.228541597s to wait for elevateKubeSystemPrivileges
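The burst of "kubectl get sa default" calls above is minikube polling until the cluster's default ServiceAccount exists, roughly one attempt every 500ms for about 14s, before it reports elevateKubeSystemPrivileges as complete. The same wait reduces to an illustrative shell loop (not minikube's actual implementation):

    until sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # matches the ~500ms spacing between attempts in the log
    done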
	W0528 20:23:10.427930   12512 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0528 20:23:10.427941   12512 kubeadm.go:393] duration metric: took 25.136155888s to StartCluster
	I0528 20:23:10.427960   12512 settings.go:142] acquiring lock: {Name:mkc1182212d780ab38539839372c25eb989b2e70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:23:10.428087   12512 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 20:23:10.428544   12512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/kubeconfig: {Name:mkb9ab62df00efc21274382bbe7156f03baabac9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:23:10.428741   12512 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0528 20:23:10.428765   12512 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0528 20:23:10.430753   12512 out.go:177] * Verifying Kubernetes components...
	I0528 20:23:10.428826   12512 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0528 20:23:10.428927   12512 config.go:182] Loaded profile config "addons-307023": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
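The toEnable map above lists every addon considered for this profile, with ingress, ingress-dns, metrics-server, registry, csi-hostpath-driver, helm-tiller and others switched on. Outside the test harness the same toggles are driven from the minikube CLI, for example (illustrative):

    minikube -p addons-307023 addons list
    minikube -p addons-307023 addons enable ingress
    minikube -p addons-307023 addons enable metrics-server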
	I0528 20:23:10.432118   12512 addons.go:69] Setting yakd=true in profile "addons-307023"
	I0528 20:23:10.432130   12512 addons.go:69] Setting inspektor-gadget=true in profile "addons-307023"
	I0528 20:23:10.432154   12512 addons.go:69] Setting storage-provisioner=true in profile "addons-307023"
	I0528 20:23:10.432165   12512 addons.go:234] Setting addon inspektor-gadget=true in "addons-307023"
	I0528 20:23:10.432168   12512 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-307023"
	I0528 20:23:10.432170   12512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 20:23:10.432183   12512 addons.go:69] Setting metrics-server=true in profile "addons-307023"
	I0528 20:23:10.432190   12512 addons.go:69] Setting gcp-auth=true in profile "addons-307023"
	I0528 20:23:10.432200   12512 addons.go:69] Setting volcano=true in profile "addons-307023"
	I0528 20:23:10.432206   12512 host.go:66] Checking if "addons-307023" exists ...
	I0528 20:23:10.432218   12512 addons.go:69] Setting registry=true in profile "addons-307023"
	I0528 20:23:10.432224   12512 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-307023"
	I0528 20:23:10.432226   12512 addons.go:69] Setting volumesnapshots=true in profile "addons-307023"
	I0528 20:23:10.432178   12512 addons.go:234] Setting addon storage-provisioner=true in "addons-307023"
	I0528 20:23:10.432243   12512 addons.go:234] Setting addon volumesnapshots=true in "addons-307023"
	I0528 20:23:10.432247   12512 addons.go:69] Setting default-storageclass=true in profile "addons-307023"
	I0528 20:23:10.432240   12512 addons.go:69] Setting ingress=true in profile "addons-307023"
	I0528 20:23:10.432266   12512 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-307023"
	I0528 20:23:10.432266   12512 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-307023"
	I0528 20:23:10.432272   12512 host.go:66] Checking if "addons-307023" exists ...
	I0528 20:23:10.432274   12512 host.go:66] Checking if "addons-307023" exists ...
	I0528 20:23:10.432283   12512 addons.go:234] Setting addon ingress=true in "addons-307023"
	I0528 20:23:10.432290   12512 host.go:66] Checking if "addons-307023" exists ...
	I0528 20:23:10.432321   12512 host.go:66] Checking if "addons-307023" exists ...
	I0528 20:23:10.432191   12512 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-307023"
	I0528 20:23:10.432674   12512 addons.go:69] Setting helm-tiller=true in profile "addons-307023"
	I0528 20:23:10.432679   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.432698   12512 addons.go:234] Setting addon helm-tiller=true in "addons-307023"
	I0528 20:23:10.432713   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.432719   12512 host.go:66] Checking if "addons-307023" exists ...
	I0528 20:23:10.432218   12512 addons.go:234] Setting addon volcano=true in "addons-307023"
	I0528 20:23:10.432673   12512 addons.go:69] Setting ingress-dns=true in profile "addons-307023"
	I0528 20:23:10.432792   12512 host.go:66] Checking if "addons-307023" exists ...
	I0528 20:23:10.432810   12512 addons.go:234] Setting addon ingress-dns=true in "addons-307023"
	I0528 20:23:10.432848   12512 host.go:66] Checking if "addons-307023" exists ...
	I0528 20:23:10.433034   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.433057   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.433060   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.433074   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.433092   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.433108   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.433113   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.432209   12512 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-307023"
	I0528 20:23:10.433129   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.433143   12512 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-307023"
	I0528 20:23:10.432201   12512 addons.go:234] Setting addon metrics-server=true in "addons-307023"
	I0528 20:23:10.432210   12512 mustload.go:65] Loading cluster: addons-307023
	I0528 20:23:10.432216   12512 addons.go:69] Setting cloud-spanner=true in profile "addons-307023"
	I0528 20:23:10.433293   12512 addons.go:234] Setting addon cloud-spanner=true in "addons-307023"
	I0528 20:23:10.433321   12512 host.go:66] Checking if "addons-307023" exists ...
	I0528 20:23:10.433360   12512 config.go:182] Loaded profile config "addons-307023": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 20:23:10.433425   12512 host.go:66] Checking if "addons-307023" exists ...
	I0528 20:23:10.432242   12512 addons.go:234] Setting addon registry=true in "addons-307023"
	I0528 20:23:10.433632   12512 host.go:66] Checking if "addons-307023" exists ...
	I0528 20:23:10.433677   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.433720   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.433680   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.433803   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.432663   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.433854   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.432665   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.433906   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.433930   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.432158   12512 addons.go:234] Setting addon yakd=true in "addons-307023"
	I0528 20:23:10.432669   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.432669   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.434050   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.434061   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.432663   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.434090   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.434125   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.434135   12512 host.go:66] Checking if "addons-307023" exists ...
	I0528 20:23:10.434100   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.434274   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.434471   12512 host.go:66] Checking if "addons-307023" exists ...
	I0528 20:23:10.454031   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36189
	I0528 20:23:10.454068   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46781
	I0528 20:23:10.454103   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44853
	I0528 20:23:10.454444   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.454498   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.454544   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33893
	I0528 20:23:10.454886   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.455063   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.455081   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.455139   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.455148   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.455157   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.455465   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.455469   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.455702   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.455722   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.456012   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.456060   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.456169   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.458104   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.458119   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.458169   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34781
	I0528 20:23:10.458554   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.458908   12512 main.go:141] libmachine: (addons-307023) Calling .GetState
	I0528 20:23:10.459094   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.459456   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.459466   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.459663   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.459754   12512 main.go:141] libmachine: (addons-307023) Calling .GetState
	I0528 20:23:10.462264   12512 host.go:66] Checking if "addons-307023" exists ...
	I0528 20:23:10.462658   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.462775   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.464641   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42345
	I0528 20:23:10.466058   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.466096   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.466358   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.466394   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.466879   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.466899   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.466961   12512 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-307023"
	I0528 20:23:10.467009   12512 host.go:66] Checking if "addons-307023" exists ...
	I0528 20:23:10.467365   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.467402   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.467622   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.467657   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.474151   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46087
	I0528 20:23:10.474695   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.475268   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.475649   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.475672   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.476177   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.476193   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.476337   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.476868   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.476915   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.477479   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.478084   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.478134   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.502516   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34847
	I0528 20:23:10.503146   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.503228   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35799
	I0528 20:23:10.503310   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40083
	I0528 20:23:10.503672   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.503689   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.503750   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.503772   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.504172   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.504199   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.504302   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.504324   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.504660   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.504668   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.505209   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.505246   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.505250   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.505278   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.505643   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38605
	I0528 20:23:10.505797   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.505825   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34659
	I0528 20:23:10.506035   12512 main.go:141] libmachine: (addons-307023) Calling .GetState
	I0528 20:23:10.507541   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.507794   12512 main.go:141] libmachine: (addons-307023) Calling .DriverName
	I0528 20:23:10.508037   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.508052   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.509828   12512 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0528 20:23:10.508390   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.508845   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40831
	I0528 20:23:10.511367   12512 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0528 20:23:10.511381   12512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0528 20:23:10.511398   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
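Each addon in this section follows the same installation pattern: the manifest is streamed over SSH into /etc/kubernetes/addons/ (the "scp memory -->" lines) and later applied with the bundled kubectl against the in-VM kubeconfig. For the helm-tiller file above that amounts to roughly the following (an illustrative command, not copied from the log):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml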
	I0528 20:23:10.512054   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.512098   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.512573   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.513035   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.513058   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.513360   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.513875   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.513913   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.514319   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.514651   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:23:10.514668   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.514874   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:23:10.515066   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:23:10.515225   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:23:10.515425   12512 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/id_rsa Username:docker}
	I0528 20:23:10.518162   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.518755   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.518770   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.518827   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33807
	I0528 20:23:10.519304   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.519700   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.519733   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.519935   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.520355   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.520374   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.520430   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46515
	I0528 20:23:10.520848   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.521014   12512 main.go:141] libmachine: (addons-307023) Calling .GetState
	I0528 20:23:10.522126   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41273
	I0528 20:23:10.522534   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.523010   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.523026   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.523376   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.523902   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.523939   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.524806   12512 main.go:141] libmachine: (addons-307023) Calling .DriverName
	I0528 20:23:10.526859   12512 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0528 20:23:10.525444   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.526753   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38967
	I0528 20:23:10.528374   12512 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0528 20:23:10.528387   12512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0528 20:23:10.528405   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:23:10.529209   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.529229   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.529587   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.529666   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.529847   12512 main.go:141] libmachine: (addons-307023) Calling .GetState
	I0528 20:23:10.530069   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42071
	I0528 20:23:10.530502   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.530748   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.530764   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.530942   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.530955   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.531013   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40701
	I0528 20:23:10.531408   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.531603   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.531799   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.531853   12512 main.go:141] libmachine: (addons-307023) Calling .GetState
	I0528 20:23:10.531966   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.531977   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.532365   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.532417   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.532546   12512 main.go:141] libmachine: (addons-307023) Calling .GetState
	I0528 20:23:10.534176   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.534221   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.534464   12512 main.go:141] libmachine: (addons-307023) Calling .DriverName
	I0528 20:23:10.534564   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:23:10.534583   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.536093   12512 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0528 20:23:10.537276   12512 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0528 20:23:10.537295   12512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0528 20:23:10.537312   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:23:10.536183   12512 main.go:141] libmachine: (addons-307023) Calling .DriverName
	I0528 20:23:10.534845   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:23:10.536216   12512 main.go:141] libmachine: (addons-307023) Calling .DriverName
	I0528 20:23:10.537554   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:23:10.537742   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:23:10.537843   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42725
	I0528 20:23:10.538010   12512 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/id_rsa Username:docker}
	I0528 20:23:10.538164   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:10.538174   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:10.541031   12512 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0528 20:23:10.538465   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:10.538490   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:10.538601   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.540563   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44787
	I0528 20:23:10.540601   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.541133   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:23:10.542693   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38545
	I0528 20:23:10.544877   12512 out.go:177]   - Using image docker.io/registry:2.8.3
	I0528 20:23:10.543373   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33251
	I0528 20:23:10.543618   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:10.545003   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:10.545012   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:10.543655   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:23:10.545065   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.543985   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:23:10.544120   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.544254   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.545182   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.544589   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.548079   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.548167   12512 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0528 20:23:10.548188   12512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0528 20:23:10.545313   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:10.545335   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:10.548211   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:23:10.548227   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:10.545406   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:23:10.545525   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.547298   12512 main.go:141] libmachine: Using API Version  1
	W0528 20:23:10.548304   12512 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
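The warning above is expected on this runner: the volcano addon is skipped because it does not support the crio container runtime, so its enablement callback fails without affecting the other addons. Had it been requested explicitly, it could be switched off again with something like:

    minikube -p addons-307023 addons disable volcano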
	I0528 20:23:10.548313   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.547586   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45531
	I0528 20:23:10.547725   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.548424   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.548482   12512 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/id_rsa Username:docker}
	I0528 20:23:10.548679   12512 main.go:141] libmachine: (addons-307023) Calling .DriverName
	I0528 20:23:10.550302   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36343
	I0528 20:23:10.550477   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.550725   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.551002   12512 main.go:141] libmachine: (addons-307023) Calling .GetState
	I0528 20:23:10.551132   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.551149   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.551948   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.551968   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.552962   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45345
	I0528 20:23:10.553389   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.553929   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.554126   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.554167   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.554265   12512 main.go:141] libmachine: (addons-307023) Calling .DriverName
	I0528 20:23:10.554373   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:23:10.554394   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.554426   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.554493   12512 main.go:141] libmachine: (addons-307023) Calling .GetState
	I0528 20:23:10.554696   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.554712   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.554724   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.554749   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:23:10.554929   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:23:10.556730   12512 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0528 20:23:10.554727   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.555145   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:23:10.555724   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.556233   12512 main.go:141] libmachine: (addons-307023) Calling .DriverName
	I0528 20:23:10.556356   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.558125   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.558180   12512 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0528 20:23:10.558198   12512 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0528 20:23:10.558230   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:23:10.558450   12512 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/id_rsa Username:docker}
	I0528 20:23:10.558740   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.558763   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.558810   12512 main.go:141] libmachine: (addons-307023) Calling .GetState
	I0528 20:23:10.559312   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.559355   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.562411   12512 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.28.1
	I0528 20:23:10.559542   12512 main.go:141] libmachine: (addons-307023) Calling .GetState
	I0528 20:23:10.561847   12512 addons.go:234] Setting addon default-storageclass=true in "addons-307023"
	I0528 20:23:10.562039   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.562600   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:23:10.563582   12512 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0528 20:23:10.563596   12512 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0528 20:23:10.563198   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43005
	I0528 20:23:10.563610   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:23:10.563674   12512 host.go:66] Checking if "addons-307023" exists ...
	I0528 20:23:10.563716   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:23:10.563738   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.564024   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.564050   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.564589   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:23:10.564816   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:23:10.564977   12512 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/id_rsa Username:docker}
	I0528 20:23:10.565482   12512 main.go:141] libmachine: (addons-307023) Calling .DriverName
	I0528 20:23:10.565669   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37547
	I0528 20:23:10.567114   12512 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0528 20:23:10.566026   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.566281   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.568251   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.569832   12512 out.go:177]   - Using image docker.io/busybox:stable
	I0528 20:23:10.568629   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:23:10.568641   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44153
	I0528 20:23:10.568937   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:23:10.569084   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.569084   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.570316   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34443
	I0528 20:23:10.571170   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.571197   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.571254   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.571345   12512 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0528 20:23:10.571357   12512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0528 20:23:10.571374   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:23:10.571403   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:23:10.571533   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:23:10.571600   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.571736   12512 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/id_rsa Username:docker}
	I0528 20:23:10.572014   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.572071   12512 main.go:141] libmachine: (addons-307023) Calling .GetState
	I0528 20:23:10.572203   12512 main.go:141] libmachine: (addons-307023) Calling .GetState
	I0528 20:23:10.572874   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.572948   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.573644   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.573665   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.573952   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.573974   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.574448   12512 main.go:141] libmachine: (addons-307023) Calling .DriverName
	I0528 20:23:10.574654   12512 main.go:141] libmachine: (addons-307023) Calling .DriverName
	I0528 20:23:10.576143   12512 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0528 20:23:10.574998   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.575306   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.576099   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.576724   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:23:10.577484   12512 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0528 20:23:10.578723   12512 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0528 20:23:10.578742   12512 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0528 20:23:10.578761   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:23:10.577510   12512 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0528 20:23:10.578797   12512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0528 20:23:10.578812   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:23:10.577546   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:23:10.578868   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.577802   12512 main.go:141] libmachine: (addons-307023) Calling .GetState
	I0528 20:23:10.577806   12512 main.go:141] libmachine: (addons-307023) Calling .GetState
	I0528 20:23:10.577834   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:23:10.578703   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34787
	I0528 20:23:10.579107   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:23:10.579378   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.579452   12512 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/id_rsa Username:docker}
	I0528 20:23:10.580120   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.580136   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.580827   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.581050   12512 main.go:141] libmachine: (addons-307023) Calling .GetState
	I0528 20:23:10.581226   12512 main.go:141] libmachine: (addons-307023) Calling .DriverName
	I0528 20:23:10.583095   12512 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0528 20:23:10.583117   12512 main.go:141] libmachine: (addons-307023) Calling .DriverName
	I0528 20:23:10.584608   12512 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0528 20:23:10.584621   12512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0528 20:23:10.584639   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:23:10.586170   12512 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0528 20:23:10.583528   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.583553   12512 main.go:141] libmachine: (addons-307023) Calling .DriverName
	I0528 20:23:10.584044   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:23:10.584266   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.584829   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:23:10.587602   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:23:10.587629   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.588808   12512 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0528 20:23:10.587699   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:23:10.587758   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:23:10.587817   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:23:10.588275   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.588891   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:23:10.589831   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.591900   12512 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0528 20:23:10.590032   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:23:10.590061   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:23:10.590820   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:23:10.590836   12512 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0528 20:23:10.591039   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:23:10.592989   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39947
	I0528 20:23:10.593007   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43059
	I0528 20:23:10.594344   12512 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0528 20:23:10.594359   12512 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0528 20:23:10.594373   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:23:10.593294   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.593294   12512 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0528 20:23:10.594418   12512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0528 20:23:10.594429   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:23:10.593435   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.593479   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.593487   12512 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/id_rsa Username:docker}
	I0528 20:23:10.593593   12512 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/id_rsa Username:docker}
	I0528 20:23:10.593678   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:23:10.595313   12512 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/id_rsa Username:docker}
	I0528 20:23:10.595399   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.595418   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.595644   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.595703   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.595759   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.596661   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.596833   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.597103   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.597377   12512 main.go:141] libmachine: (addons-307023) Calling .GetState
	I0528 20:23:10.598018   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.598157   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.598293   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:23:10.598321   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.598467   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:23:10.598617   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:23:10.598636   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.598660   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:23:10.598767   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:23:10.598838   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:23:10.598995   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:23:10.599096   12512 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/id_rsa Username:docker}
	I0528 20:23:10.599259   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:23:10.599315   12512 main.go:141] libmachine: (addons-307023) Calling .DriverName
	I0528 20:23:10.599510   12512 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/id_rsa Username:docker}
	I0528 20:23:10.601019   12512 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0528 20:23:10.602470   12512 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0528 20:23:10.603727   12512 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0528 20:23:10.604920   12512 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0528 20:23:10.606129   12512 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0528 20:23:10.607284   12512 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0528 20:23:10.608612   12512 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0528 20:23:10.609959   12512 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0528 20:23:10.611260   12512 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0528 20:23:10.611281   12512 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0528 20:23:10.611306   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:23:10.614756   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.615168   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:23:10.615195   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.615336   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:23:10.615538   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:23:10.615673   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:23:10.615824   12512 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/id_rsa Username:docker}
	W0528 20:23:10.627769   12512 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:50730->192.168.39.230:22: read: connection reset by peer
	I0528 20:23:10.627802   12512 retry.go:31] will retry after 342.148262ms: ssh: handshake failed: read tcp 192.168.39.1:50730->192.168.39.230:22: read: connection reset by peer
	W0528 20:23:10.627869   12512 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:50732->192.168.39.230:22: read: connection reset by peer
	I0528 20:23:10.627881   12512 retry.go:31] will retry after 154.623703ms: ssh: handshake failed: read tcp 192.168.39.1:50732->192.168.39.230:22: read: connection reset by peer
	W0528 20:23:10.627994   12512 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:50748->192.168.39.230:22: read: connection reset by peer
	I0528 20:23:10.628019   12512 retry.go:31] will retry after 154.109106ms: ssh: handshake failed: read tcp 192.168.39.1:50748->192.168.39.230:22: read: connection reset by peer
	I0528 20:23:10.641978   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38191
	I0528 20:23:10.642431   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.642922   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.642937   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.643293   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.643471   12512 main.go:141] libmachine: (addons-307023) Calling .GetState
	I0528 20:23:10.645452   12512 main.go:141] libmachine: (addons-307023) Calling .DriverName
	I0528 20:23:10.645842   12512 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0528 20:23:10.645861   12512 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0528 20:23:10.645879   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:23:10.648979   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.649425   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:23:10.649451   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.649625   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:23:10.649825   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:23:10.650008   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:23:10.650152   12512 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/id_rsa Username:docker}
	W0528 20:23:10.653783   12512 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:50754->192.168.39.230:22: read: connection reset by peer
	I0528 20:23:10.653807   12512 retry.go:31] will retry after 167.254965ms: ssh: handshake failed: read tcp 192.168.39.1:50754->192.168.39.230:22: read: connection reset by peer
	I0528 20:23:10.968648   12512 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 20:23:10.968978   12512 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0528 20:23:10.983115   12512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0528 20:23:11.000531   12512 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0528 20:23:11.000551   12512 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0528 20:23:11.018403   12512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0528 20:23:11.085444   12512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0528 20:23:11.110037   12512 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0528 20:23:11.110056   12512 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0528 20:23:11.116782   12512 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0528 20:23:11.116797   12512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0528 20:23:11.140739   12512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0528 20:23:11.156167   12512 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0528 20:23:11.156192   12512 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0528 20:23:11.158811   12512 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0528 20:23:11.158836   12512 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0528 20:23:11.180971   12512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0528 20:23:11.182035   12512 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0528 20:23:11.182053   12512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0528 20:23:11.233563   12512 node_ready.go:35] waiting up to 6m0s for node "addons-307023" to be "Ready" ...
	I0528 20:23:11.239265   12512 node_ready.go:49] node "addons-307023" has status "Ready":"True"
	I0528 20:23:11.239298   12512 node_ready.go:38] duration metric: took 5.695157ms for node "addons-307023" to be "Ready" ...
	I0528 20:23:11.239311   12512 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 20:23:11.250195   12512 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hmjmn" in "kube-system" namespace to be "Ready" ...
	I0528 20:23:11.292822   12512 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0528 20:23:11.292847   12512 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0528 20:23:11.311403   12512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0528 20:23:11.320200   12512 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0528 20:23:11.320232   12512 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0528 20:23:11.379938   12512 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0528 20:23:11.379964   12512 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0528 20:23:11.383564   12512 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0528 20:23:11.383582   12512 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0528 20:23:11.385897   12512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0528 20:23:11.394584   12512 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0528 20:23:11.394611   12512 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0528 20:23:11.415311   12512 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0528 20:23:11.415337   12512 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0528 20:23:11.469500   12512 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0528 20:23:11.469522   12512 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0528 20:23:11.477009   12512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0528 20:23:11.540329   12512 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0528 20:23:11.540363   12512 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0528 20:23:11.544803   12512 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0528 20:23:11.544822   12512 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0528 20:23:11.620898   12512 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0528 20:23:11.620931   12512 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0528 20:23:11.639257   12512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0528 20:23:11.711000   12512 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0528 20:23:11.711022   12512 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0528 20:23:11.713993   12512 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0528 20:23:11.714009   12512 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0528 20:23:11.742124   12512 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0528 20:23:11.742151   12512 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0528 20:23:11.780020   12512 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0528 20:23:11.780043   12512 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0528 20:23:11.834451   12512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0528 20:23:11.884374   12512 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0528 20:23:11.884396   12512 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0528 20:23:11.956177   12512 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0528 20:23:11.956217   12512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0528 20:23:12.008160   12512 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0528 20:23:12.008185   12512 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0528 20:23:12.070468   12512 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0528 20:23:12.070487   12512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0528 20:23:12.159915   12512 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0528 20:23:12.159936   12512 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0528 20:23:12.315185   12512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0528 20:23:12.341966   12512 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0528 20:23:12.341991   12512 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0528 20:23:12.438960   12512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0528 20:23:12.633406   12512 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0528 20:23:12.633436   12512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0528 20:23:12.647501   12512 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0528 20:23:12.647528   12512 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0528 20:23:12.878290   12512 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0528 20:23:12.878314   12512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0528 20:23:12.937778   12512 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0528 20:23:12.937808   12512 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0528 20:23:13.221393   12512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0528 20:23:13.256685   12512 pod_ready.go:102] pod "coredns-7db6d8ff4d-hmjmn" in "kube-system" namespace has status "Ready":"False"
	I0528 20:23:13.268417   12512 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0528 20:23:13.268436   12512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0528 20:23:13.412451   12512 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0528 20:23:13.412476   12512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0528 20:23:13.819282   12512 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.850265429s)
	I0528 20:23:13.819313   12512 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0528 20:23:13.819322   12512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.836171727s)
	I0528 20:23:13.819368   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:13.819382   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:13.819718   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:13.819739   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:13.819759   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:13.819845   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:13.819867   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:13.820200   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:13.820215   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:13.909469   12512 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0528 20:23:13.909492   12512 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0528 20:23:14.218739   12512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0528 20:23:14.333140   12512 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-307023" context rescaled to 1 replicas
	I0528 20:23:15.261589   12512 pod_ready.go:102] pod "coredns-7db6d8ff4d-hmjmn" in "kube-system" namespace has status "Ready":"False"
	I0528 20:23:15.734878   12512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.716428977s)
	I0528 20:23:15.734945   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:15.734945   12512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.649468195s)
	I0528 20:23:15.734961   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:15.734985   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:15.734996   12512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.594231216s)
	I0528 20:23:15.735003   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:15.735025   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:15.735036   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:15.735049   12512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.554047986s)
	I0528 20:23:15.735069   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:15.735076   12512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.423647935s)
	I0528 20:23:15.735081   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:15.735095   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:15.735105   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:15.735115   12512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.349193179s)
	I0528 20:23:15.735134   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:15.735147   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:15.735474   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:15.735488   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:15.735497   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:15.735505   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:15.735555   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:15.735562   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:15.735570   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:15.735577   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:15.735858   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:15.735889   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:15.735895   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:15.735903   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:15.735910   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:15.736021   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:15.736062   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:15.736068   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:15.736075   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:15.736081   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:15.736183   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:15.736222   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:15.736227   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:15.736266   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:15.736280   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:15.736289   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:15.736302   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:15.736307   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:15.736312   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:15.736316   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:15.736364   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:15.736370   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:15.736377   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:15.736384   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:15.737131   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:15.737159   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:15.737165   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:15.737703   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:15.737729   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:15.737736   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:15.738069   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:15.738099   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:15.738107   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:15.738257   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:15.738283   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:15.738289   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:15.738348   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:15.738358   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:15.738366   12512 addons.go:475] Verifying addon registry=true in "addons-307023"
	I0528 20:23:15.740100   12512 out.go:177] * Verifying registry addon...
	I0528 20:23:15.742105   12512 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0528 20:23:15.859525   12512 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0528 20:23:15.859544   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:15.982573   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:15.982601   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:15.982887   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:15.982910   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:15.982918   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	W0528 20:23:15.983009   12512 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0528 20:23:16.008816   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:16.008839   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:16.009213   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:16.009234   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:16.274665   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:16.289932   12512 pod_ready.go:92] pod "coredns-7db6d8ff4d-hmjmn" in "kube-system" namespace has status "Ready":"True"
	I0528 20:23:16.289977   12512 pod_ready.go:81] duration metric: took 5.039759258s for pod "coredns-7db6d8ff4d-hmjmn" in "kube-system" namespace to be "Ready" ...
	I0528 20:23:16.289990   12512 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-p4qdk" in "kube-system" namespace to be "Ready" ...
	I0528 20:23:16.317839   12512 pod_ready.go:92] pod "coredns-7db6d8ff4d-p4qdk" in "kube-system" namespace has status "Ready":"True"
	I0528 20:23:16.317860   12512 pod_ready.go:81] duration metric: took 27.863115ms for pod "coredns-7db6d8ff4d-p4qdk" in "kube-system" namespace to be "Ready" ...
	I0528 20:23:16.317869   12512 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-307023" in "kube-system" namespace to be "Ready" ...
	I0528 20:23:16.351327   12512 pod_ready.go:92] pod "etcd-addons-307023" in "kube-system" namespace has status "Ready":"True"
	I0528 20:23:16.351352   12512 pod_ready.go:81] duration metric: took 33.469285ms for pod "etcd-addons-307023" in "kube-system" namespace to be "Ready" ...
	I0528 20:23:16.351364   12512 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-307023" in "kube-system" namespace to be "Ready" ...
	I0528 20:23:16.390763   12512 pod_ready.go:92] pod "kube-apiserver-addons-307023" in "kube-system" namespace has status "Ready":"True"
	I0528 20:23:16.390783   12512 pod_ready.go:81] duration metric: took 39.411236ms for pod "kube-apiserver-addons-307023" in "kube-system" namespace to be "Ready" ...
	I0528 20:23:16.390793   12512 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-307023" in "kube-system" namespace to be "Ready" ...
	I0528 20:23:16.416036   12512 pod_ready.go:92] pod "kube-controller-manager-addons-307023" in "kube-system" namespace has status "Ready":"True"
	I0528 20:23:16.416057   12512 pod_ready.go:81] duration metric: took 25.257529ms for pod "kube-controller-manager-addons-307023" in "kube-system" namespace to be "Ready" ...
	I0528 20:23:16.416070   12512 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zm9r7" in "kube-system" namespace to be "Ready" ...
	I0528 20:23:16.663062   12512 pod_ready.go:92] pod "kube-proxy-zm9r7" in "kube-system" namespace has status "Ready":"True"
	I0528 20:23:16.663086   12512 pod_ready.go:81] duration metric: took 247.006121ms for pod "kube-proxy-zm9r7" in "kube-system" namespace to be "Ready" ...
	I0528 20:23:16.663097   12512 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-307023" in "kube-system" namespace to be "Ready" ...
	I0528 20:23:16.788485   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:17.055690   12512 pod_ready.go:92] pod "kube-scheduler-addons-307023" in "kube-system" namespace has status "Ready":"True"
	I0528 20:23:17.055719   12512 pod_ready.go:81] duration metric: took 392.614322ms for pod "kube-scheduler-addons-307023" in "kube-system" namespace to be "Ready" ...
	I0528 20:23:17.055730   12512 pod_ready.go:38] duration metric: took 5.816404218s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 20:23:17.055749   12512 api_server.go:52] waiting for apiserver process to appear ...
	I0528 20:23:17.055814   12512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 20:23:17.252721   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:17.567115   12512 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0528 20:23:17.567162   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:23:17.570676   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:17.571214   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:23:17.571245   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:17.571535   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:23:17.571746   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:23:17.571908   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:23:17.572044   12512 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/id_rsa Username:docker}
	I0528 20:23:17.748141   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:17.789839   12512 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0528 20:23:18.010045   12512 addons.go:234] Setting addon gcp-auth=true in "addons-307023"
	I0528 20:23:18.010099   12512 host.go:66] Checking if "addons-307023" exists ...
	I0528 20:23:18.010416   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:18.010448   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:18.024864   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36913
	I0528 20:23:18.025355   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:18.025900   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:18.025924   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:18.026198   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:18.026785   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:18.026820   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:18.041280   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36089
	I0528 20:23:18.041700   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:18.042173   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:18.042194   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:18.042515   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:18.042720   12512 main.go:141] libmachine: (addons-307023) Calling .GetState
	I0528 20:23:18.044344   12512 main.go:141] libmachine: (addons-307023) Calling .DriverName
	I0528 20:23:18.044577   12512 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0528 20:23:18.044598   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:23:18.047208   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:18.047674   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:23:18.047700   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:18.047896   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:23:18.048086   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:23:18.048259   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:23:18.048389   12512 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/id_rsa Username:docker}
	I0528 20:23:18.247959   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:18.757391   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:19.247239   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:19.775262   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:19.786311   12512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.30926316s)
	I0528 20:23:19.786367   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:19.786382   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:19.786389   12512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.147087587s)
	I0528 20:23:19.786433   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:19.786453   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:19.786459   12512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.951966101s)
	I0528 20:23:19.786482   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:19.786492   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:19.786743   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:19.786761   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:19.786771   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:19.786765   12512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.471539652s)
	I0528 20:23:19.786780   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	W0528 20:23:19.786811   12512 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0528 20:23:19.786841   12512 retry.go:31] will retry after 201.548356ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
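	The two failures above are the usual CRD-establishment race: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is applied in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, so the API server rejects it ("ensure CRDs are installed first") until those CRDs are established. The log shows minikube recovering by retrying and, further down, re-applying with --force. A minimal standalone sketch of the ordering that avoids the race (not minikube's actual addon code; it assumes kubectl is on PATH and reuses the manifest paths quoted in this log) would wait for the CRD to report Established before creating the class:

	// Sketch: apply the VolumeSnapshotClass CRD, wait for it to become
	// Established, then apply the class itself, instead of relying on retries.
	// Assumes kubectl is on PATH and the addon manifests live under
	// /etc/kubernetes/addons (paths taken from the log above).
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// run shells out to kubectl and streams its output, mirroring how the
	// addon manifests are applied via a remote shell in the log above.
	func run(args ...string) error {
		cmd := exec.Command("kubectl", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		// 1. Create the CRD first.
		if err := run("apply", "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml"); err != nil {
			fmt.Fprintln(os.Stderr, "apply CRD:", err)
			os.Exit(1)
		}
		// 2. Block until the API server reports the CRD as Established --
		//    the condition the error above says was missing.
		if err := run("wait", "--for=condition=established", "--timeout=60s",
			"crd/volumesnapshotclasses.snapshot.storage.k8s.io"); err != nil {
			fmt.Fprintln(os.Stderr, "wait for CRD:", err)
			os.Exit(1)
		}
		// 3. Only now is it safe to create the VolumeSnapshotClass object.
		if err := run("apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"); err != nil {
			fmt.Fprintln(os.Stderr, "apply snapshot class:", err)
			os.Exit(1)
		}
	}

	In the run below the retry path succeeds on its own (the forced re-apply completes about two seconds later), so this race is noise rather than the cause of the Ingress test failure.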
	I0528 20:23:19.786901   12512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.347907855s)
	I0528 20:23:19.786919   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:19.786928   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:19.787041   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:19.787067   12512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.565646841s)
	I0528 20:23:19.787078   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:19.787086   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:19.787088   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:19.787095   12512 addons.go:475] Verifying addon ingress=true in "addons-307023"
	I0528 20:23:19.787134   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:19.790000   12512 out.go:177] * Verifying ingress addon...
	I0528 20:23:19.787161   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:19.787178   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:19.787186   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:19.787200   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:19.787224   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:19.787098   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:19.791511   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:19.791527   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:19.791528   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:19.791538   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:19.791544   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:19.791552   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:19.791541   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:19.791618   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:19.791607   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:19.791797   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:19.791798   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:19.791826   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:19.791828   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:19.791833   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:19.791836   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:19.791845   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:19.791852   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:19.791850   12512 addons.go:475] Verifying addon metrics-server=true in "addons-307023"
	I0528 20:23:19.791916   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:19.791948   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:19.791956   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:19.794705   12512 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-307023 service yakd-dashboard -n yakd-dashboard
	
	I0528 20:23:19.792100   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:19.792120   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:19.792126   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:19.792144   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:19.792481   12512 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0528 20:23:19.796102   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:19.796134   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:19.812456   12512 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0528 20:23:19.812479   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:19.989241   12512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0528 20:23:20.246734   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:20.300461   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:20.759745   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:20.848521   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:21.016432   12512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.797633005s)
	I0528 20:23:21.016471   12512 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.971878962s)
	I0528 20:23:21.016488   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:21.016502   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:21.018267   12512 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0528 20:23:21.016440   12512 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.960603649s)
	I0528 20:23:21.016854   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:21.016896   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:21.019698   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:21.019710   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:21.019722   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:21.019719   12512 api_server.go:72] duration metric: took 10.59091718s to wait for apiserver process to appear ...
	I0528 20:23:21.019736   12512 api_server.go:88] waiting for apiserver healthz status ...
	I0528 20:23:21.019763   12512 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I0528 20:23:21.021343   12512 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0528 20:23:21.019954   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:21.019984   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:21.023213   12512 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0528 20:23:21.023222   12512 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0528 20:23:21.023248   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:21.023267   12512 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-307023"
	I0528 20:23:21.024737   12512 out.go:177] * Verifying csi-hostpath-driver addon...
	I0528 20:23:21.026730   12512 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0528 20:23:21.031083   12512 api_server.go:279] https://192.168.39.230:8443/healthz returned 200:
	ok
	I0528 20:23:21.032068   12512 api_server.go:141] control plane version: v1.30.1
	I0528 20:23:21.032084   12512 api_server.go:131] duration metric: took 12.337642ms to wait for apiserver health ...
	I0528 20:23:21.032091   12512 system_pods.go:43] waiting for kube-system pods to appear ...
	I0528 20:23:21.048862   12512 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0528 20:23:21.048891   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:21.049187   12512 system_pods.go:59] 19 kube-system pods found
	I0528 20:23:21.049218   12512 system_pods.go:61] "coredns-7db6d8ff4d-hmjmn" [805eb200-abef-49e1-b441-570367fec5ad] Running
	I0528 20:23:21.049229   12512 system_pods.go:61] "coredns-7db6d8ff4d-p4qdk" [96cce9c7-26e9-4430-80e9-194c4a5c5dda] Running
	I0528 20:23:21.049240   12512 system_pods.go:61] "csi-hostpath-attacher-0" [b16bd4e8-843e-4529-9ba9-dce28f647e6d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0528 20:23:21.049251   12512 system_pods.go:61] "csi-hostpath-resizer-0" [3b1d8f2b-28d1-4af5-ac5a-5b6f25719826] Pending
	I0528 20:23:21.049269   12512 system_pods.go:61] "csi-hostpathplugin-hlrts" [5595c3ca-5a1c-4c3c-9647-413836e28765] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0528 20:23:21.049286   12512 system_pods.go:61] "etcd-addons-307023" [af3e376d-a2c2-4316-ab43-053ff7264a31] Running
	I0528 20:23:21.049300   12512 system_pods.go:61] "kube-apiserver-addons-307023" [9e657315-1dc1-497d-95a9-dc4bd6d39d63] Running
	I0528 20:23:21.049308   12512 system_pods.go:61] "kube-controller-manager-addons-307023" [bca735b7-0408-4bab-90f4-0a4119c53722] Running
	I0528 20:23:21.049316   12512 system_pods.go:61] "kube-ingress-dns-minikube" [1f7b4e7c-b982-4c04-add7-525795548760] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0528 20:23:21.049324   12512 system_pods.go:61] "kube-proxy-zm9r7" [02de5251-8d15-4ee9-b99b-978c02f4f9c5] Running
	I0528 20:23:21.049334   12512 system_pods.go:61] "kube-scheduler-addons-307023" [98fe07e8-5d59-46a1-a938-37a1b030c5f5] Running
	I0528 20:23:21.049344   12512 system_pods.go:61] "metrics-server-c59844bb4-wjvkg" [a9aa82de-329c-4c74-bdc0-f304386c8ede] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0528 20:23:21.049356   12512 system_pods.go:61] "nvidia-device-plugin-daemonset-fw58d" [9a054b41-fa5f-4c2b-bac0-5e8f84e8860f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0528 20:23:21.049368   12512 system_pods.go:61] "registry-g8f66" [d44205f8-5d8f-4cb5-86a9-a06ec1a83ab3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0528 20:23:21.049390   12512 system_pods.go:61] "registry-proxy-6v96c" [c226957d-d70d-48ff-85a3-d800697e600d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0528 20:23:21.049412   12512 system_pods.go:61] "snapshot-controller-745499f584-hj8gg" [d46d2593-66fb-4cb3-a416-cd8c60b6e4df] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0528 20:23:21.049423   12512 system_pods.go:61] "snapshot-controller-745499f584-p8v2q" [45292627-a14e-42c3-8c4b-77094065b3de] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0528 20:23:21.049431   12512 system_pods.go:61] "storage-provisioner" [91636457-a3cb-48a7-bfd4-58907cb354d4] Running
	I0528 20:23:21.049439   12512 system_pods.go:61] "tiller-deploy-6677d64bcd-9kf86" [2e6adf96-5773-4664-abee-77443509067d] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0528 20:23:21.049449   12512 system_pods.go:74] duration metric: took 17.352384ms to wait for pod list to return data ...
	I0528 20:23:21.049463   12512 default_sa.go:34] waiting for default service account to be created ...
	I0528 20:23:21.073140   12512 default_sa.go:45] found service account: "default"
	I0528 20:23:21.073161   12512 default_sa.go:55] duration metric: took 23.688228ms for default service account to be created ...
	I0528 20:23:21.073168   12512 system_pods.go:116] waiting for k8s-apps to be running ...
	I0528 20:23:21.092080   12512 system_pods.go:86] 19 kube-system pods found
	I0528 20:23:21.092117   12512 system_pods.go:89] "coredns-7db6d8ff4d-hmjmn" [805eb200-abef-49e1-b441-570367fec5ad] Running
	I0528 20:23:21.092127   12512 system_pods.go:89] "coredns-7db6d8ff4d-p4qdk" [96cce9c7-26e9-4430-80e9-194c4a5c5dda] Running
	I0528 20:23:21.092137   12512 system_pods.go:89] "csi-hostpath-attacher-0" [b16bd4e8-843e-4529-9ba9-dce28f647e6d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0528 20:23:21.092164   12512 system_pods.go:89] "csi-hostpath-resizer-0" [3b1d8f2b-28d1-4af5-ac5a-5b6f25719826] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0528 20:23:21.092182   12512 system_pods.go:89] "csi-hostpathplugin-hlrts" [5595c3ca-5a1c-4c3c-9647-413836e28765] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0528 20:23:21.092190   12512 system_pods.go:89] "etcd-addons-307023" [af3e376d-a2c2-4316-ab43-053ff7264a31] Running
	I0528 20:23:21.092201   12512 system_pods.go:89] "kube-apiserver-addons-307023" [9e657315-1dc1-497d-95a9-dc4bd6d39d63] Running
	I0528 20:23:21.092211   12512 system_pods.go:89] "kube-controller-manager-addons-307023" [bca735b7-0408-4bab-90f4-0a4119c53722] Running
	I0528 20:23:21.092223   12512 system_pods.go:89] "kube-ingress-dns-minikube" [1f7b4e7c-b982-4c04-add7-525795548760] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0528 20:23:21.092239   12512 system_pods.go:89] "kube-proxy-zm9r7" [02de5251-8d15-4ee9-b99b-978c02f4f9c5] Running
	I0528 20:23:21.092251   12512 system_pods.go:89] "kube-scheduler-addons-307023" [98fe07e8-5d59-46a1-a938-37a1b030c5f5] Running
	I0528 20:23:21.092269   12512 system_pods.go:89] "metrics-server-c59844bb4-wjvkg" [a9aa82de-329c-4c74-bdc0-f304386c8ede] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0528 20:23:21.092282   12512 system_pods.go:89] "nvidia-device-plugin-daemonset-fw58d" [9a054b41-fa5f-4c2b-bac0-5e8f84e8860f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0528 20:23:21.092294   12512 system_pods.go:89] "registry-g8f66" [d44205f8-5d8f-4cb5-86a9-a06ec1a83ab3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0528 20:23:21.092306   12512 system_pods.go:89] "registry-proxy-6v96c" [c226957d-d70d-48ff-85a3-d800697e600d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0528 20:23:21.092317   12512 system_pods.go:89] "snapshot-controller-745499f584-hj8gg" [d46d2593-66fb-4cb3-a416-cd8c60b6e4df] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0528 20:23:21.092328   12512 system_pods.go:89] "snapshot-controller-745499f584-p8v2q" [45292627-a14e-42c3-8c4b-77094065b3de] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0528 20:23:21.092338   12512 system_pods.go:89] "storage-provisioner" [91636457-a3cb-48a7-bfd4-58907cb354d4] Running
	I0528 20:23:21.092349   12512 system_pods.go:89] "tiller-deploy-6677d64bcd-9kf86" [2e6adf96-5773-4664-abee-77443509067d] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0528 20:23:21.092360   12512 system_pods.go:126] duration metric: took 19.18558ms to wait for k8s-apps to be running ...
	I0528 20:23:21.092374   12512 system_svc.go:44] waiting for kubelet service to be running ....
	I0528 20:23:21.092423   12512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 20:23:21.125643   12512 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0528 20:23:21.125668   12512 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0528 20:23:21.175294   12512 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0528 20:23:21.175317   12512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0528 20:23:21.250803   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:21.252824   12512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0528 20:23:21.301200   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:21.533176   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:21.747364   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:21.803484   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:22.034172   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:22.123594   12512 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.031141088s)
	I0528 20:23:22.123630   12512 system_svc.go:56] duration metric: took 1.031254848s WaitForService to wait for kubelet
	I0528 20:23:22.123637   12512 kubeadm.go:576] duration metric: took 11.694839227s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 20:23:22.123655   12512 node_conditions.go:102] verifying NodePressure condition ...
	I0528 20:23:22.123938   12512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.134639499s)
	I0528 20:23:22.124005   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:22.124024   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:22.124330   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:22.124399   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:22.124414   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:22.124428   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:22.124439   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:22.125327   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:22.125356   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:22.125373   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:22.127236   12512 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 20:23:22.127262   12512 node_conditions.go:123] node cpu capacity is 2
	I0528 20:23:22.127274   12512 node_conditions.go:105] duration metric: took 3.614226ms to run NodePressure ...
	I0528 20:23:22.127288   12512 start.go:240] waiting for startup goroutines ...
	I0528 20:23:22.247379   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:22.301087   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:22.540831   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:22.758244   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:22.783448   12512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.530586876s)
	I0528 20:23:22.783510   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:22.783525   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:22.783825   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:22.783838   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:22.783845   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:22.783855   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:22.783863   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:22.784109   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:22.784160   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:22.784140   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:22.785281   12512 addons.go:475] Verifying addon gcp-auth=true in "addons-307023"
	I0528 20:23:22.786985   12512 out.go:177] * Verifying gcp-auth addon...
	I0528 20:23:22.789277   12512 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0528 20:23:22.820039   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:22.833349   12512 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0528 20:23:22.833376   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:23.032406   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:23.249396   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:23.295025   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:23.304642   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:23.531692   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:23.746736   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:23.793438   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:23.799588   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:24.031727   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:24.246444   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:24.293020   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:24.300117   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:24.533635   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:24.747410   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:24.793517   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:24.800062   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:25.033625   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:25.247227   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:25.293599   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:25.300270   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:25.532832   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:25.747672   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:25.792995   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:25.802024   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:26.032296   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:26.247028   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:26.294932   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:26.300310   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:26.618928   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:26.746812   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:26.794004   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:26.800545   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:27.032170   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:27.246753   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:27.292418   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:27.300268   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:27.533968   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:27.746417   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:27.794077   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:27.801539   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:28.031793   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:28.248817   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:28.293444   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:28.299620   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:28.532373   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:28.747756   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:28.792575   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:28.800233   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:29.032153   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:29.247541   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:29.293639   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:29.299631   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:29.532535   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:29.748755   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:29.793461   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:29.799362   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:30.034083   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:30.247320   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:30.293494   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:30.299878   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:30.531812   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:30.748423   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:30.793875   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:30.799689   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:31.032425   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:31.248326   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:31.293269   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:31.301083   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:31.532665   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:31.748277   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:31.794798   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:31.802919   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:32.032257   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:32.247021   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:32.292544   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:32.300006   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:32.532576   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:32.747388   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:32.793822   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:32.800298   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:33.033127   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:33.246841   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:33.293639   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:33.300018   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:33.533819   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:33.748051   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:33.793562   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:33.799601   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:34.031898   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:34.246538   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:34.293276   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:34.302411   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:34.534869   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:35.074731   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:35.077352   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:35.077609   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:35.078038   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:35.247209   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:35.293996   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:35.300397   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:35.532507   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:35.746181   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:35.792838   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:35.800254   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:36.032290   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:36.247032   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:36.292761   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:36.300411   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:36.532632   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:36.756098   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:36.839908   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:36.841263   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:37.208432   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:37.247160   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:37.293327   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:37.300973   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:37.533445   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:37.746806   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:37.792680   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:37.799609   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:38.036747   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:38.250834   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:38.293113   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:38.300086   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:38.532890   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:38.745963   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:38.792747   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:38.800181   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:39.032507   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:39.247241   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:39.293460   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:39.299894   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:39.533582   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:39.747135   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:39.792613   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:39.799829   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:40.032544   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:40.247078   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:40.292654   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:40.300059   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:40.532296   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:40.747338   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:40.792800   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:40.800651   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:41.034756   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:41.246819   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:41.293549   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:41.300048   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:41.531953   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:41.747243   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:41.792738   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:41.800149   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:42.033384   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:42.248288   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:42.324350   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:42.327631   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:42.531529   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:42.747515   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:42.793065   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:42.800621   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:43.032261   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:43.247382   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:43.294451   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:43.300122   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:43.532479   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:43.747415   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:43.793229   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:43.800721   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:44.032603   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:44.246433   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:44.293617   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:44.301331   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:44.534585   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:44.747838   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:44.792855   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:44.811757   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:45.033538   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:45.246703   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:45.293257   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:45.300777   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:45.536290   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:45.747297   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:45.793011   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:45.801296   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:46.032099   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:46.246870   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:46.293119   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:46.300381   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:46.532654   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:46.759549   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:46.794263   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:46.802282   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:47.031867   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:47.691122   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:47.693189   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:47.697034   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:47.697420   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:47.747438   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:47.792690   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:47.799816   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:48.032865   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:48.246245   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:48.293522   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:48.299813   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:48.531993   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:48.746665   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:48.793382   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:48.799754   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:49.032569   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:49.246967   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:49.293501   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:49.300428   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:49.532336   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:49.746874   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:49.794315   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:49.802671   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:50.033162   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:50.246485   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:50.293576   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:50.300276   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:50.533141   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:50.746879   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:50.792933   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:50.800641   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:51.032283   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:51.246760   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:51.292384   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:51.299275   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:51.539650   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:51.747066   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:51.792832   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:51.799945   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:52.032531   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:52.249997   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:52.293128   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:52.301514   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:52.532416   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:52.747431   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:52.793471   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:52.799719   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:53.032415   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:53.247347   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:53.293187   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:53.300393   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:53.534536   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:53.747368   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:53.793491   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:53.800038   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:54.033071   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:54.247960   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:54.292740   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:54.300241   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:54.536093   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:54.746251   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:54.793282   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:54.800531   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:55.032279   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:55.247255   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:55.293564   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:55.300068   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:55.532970   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:55.747106   12512 kapi.go:107] duration metric: took 40.004999177s to wait for kubernetes.io/minikube-addons=registry ...
	I0528 20:23:55.792967   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:55.800577   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:56.033059   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:56.293920   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:56.300772   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:56.532844   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:56.793892   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:56.800356   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:57.032569   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:57.295057   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:57.300965   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:57.533021   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:58.016939   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:58.017877   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:58.031623   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:58.294201   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:58.300742   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:58.533403   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:58.792877   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:58.802336   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:59.036272   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:59.292892   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:59.306951   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:59.531943   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:59.792922   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:59.800072   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:00.032901   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:00.292561   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:00.299849   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:00.532538   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:00.794023   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:00.803706   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:01.032511   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:01.293520   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:01.300080   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:01.533182   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:01.793876   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:01.800092   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:02.032464   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:02.292853   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:02.312156   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:02.533177   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:02.792910   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:02.800036   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:03.034629   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:03.293381   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:03.299886   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:03.533568   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:03.793345   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:03.799612   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:04.032491   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:04.348417   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:04.351673   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:04.531946   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:04.792845   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:04.802137   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:05.032712   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:05.293319   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:05.301182   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:05.531863   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:05.793047   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:05.801234   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:06.031986   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:06.292870   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:06.300463   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:06.533029   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:06.792678   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:06.800075   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:07.032966   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:07.292719   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:07.300558   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:07.532018   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:07.792358   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:07.799596   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:08.032612   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:08.296004   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:08.300298   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:08.663981   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:08.793608   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:08.800055   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:09.033700   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:09.293102   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:09.300679   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:09.532549   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:09.793007   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:09.800426   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:10.033215   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:10.292810   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:10.300155   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:10.532677   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:10.793238   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:10.800572   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:11.032528   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:11.293265   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:11.300932   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:11.533708   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:11.793191   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:11.800673   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:12.032635   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:12.458278   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:12.458565   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:12.540137   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:12.792699   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:12.800206   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:13.033128   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:13.294010   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:13.300429   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:13.533076   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:13.792132   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:13.800445   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:14.265535   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:14.295347   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:14.301157   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:14.532300   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:14.792938   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:14.800402   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:15.036743   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:15.293338   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:15.301060   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:15.532762   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:15.793248   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:15.799838   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:16.032469   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:16.293417   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:16.299692   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:16.534033   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:16.793267   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:16.800958   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:17.031867   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:17.293002   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:17.300559   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:17.534381   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:17.793397   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:17.799674   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:18.035982   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:18.293362   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:18.301234   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:18.532613   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:18.793651   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:18.799855   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:19.032285   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:19.293251   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:19.300747   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:19.532934   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:19.793035   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:19.800146   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:20.032834   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:20.293187   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:20.301002   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:20.531758   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:20.792566   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:20.799813   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:21.032052   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:21.292952   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:21.300903   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:21.533525   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:21.795069   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:21.801159   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:22.033177   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:22.293381   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:22.304485   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:22.532079   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:22.792115   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:22.800865   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:23.033016   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:23.293718   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:23.300364   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:23.534522   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:23.796844   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:23.808157   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:24.037080   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:24.293022   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:24.300357   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:24.532748   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:24.793869   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:24.799951   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:25.032397   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:25.292930   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:25.300629   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:25.532667   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:25.794617   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:25.800810   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:26.032555   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:26.293069   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:26.307137   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:26.537354   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:26.793537   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:26.800503   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:27.032281   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:27.298353   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:27.301569   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:27.531721   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:27.794543   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:27.800374   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:28.040512   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:28.292768   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:28.300432   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:28.534684   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:28.794907   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:28.802809   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:29.033815   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:29.293418   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:29.301085   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:29.532455   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:29.793249   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:29.802694   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:30.032712   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:30.595748   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:30.596674   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:30.599440   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:30.793036   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:30.800373   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:31.037660   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:31.293465   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:31.299558   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:31.532874   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:31.792543   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:31.800215   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:32.032426   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:32.294702   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:32.302778   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:32.533941   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:32.793906   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:32.799268   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:33.031898   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:33.295132   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:33.300645   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:33.533370   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:33.795073   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:33.801748   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:34.032050   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:34.294460   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:34.299328   12512 kapi.go:107] duration metric: took 1m14.506847113s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0528 20:24:34.532579   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:34.794014   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:35.032225   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:35.293845   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:35.532835   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:35.793381   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:36.032823   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:36.293157   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:36.532279   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:36.793024   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:37.034638   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:37.293251   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:37.532496   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:37.794951   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:38.034377   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:38.293068   12512 kapi.go:107] duration metric: took 1m15.503786906s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0528 20:24:38.294794   12512 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-307023 cluster.
	I0528 20:24:38.296247   12512 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0528 20:24:38.297551   12512 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0528 20:24:38.532566   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:39.033654   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:39.532193   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:40.031824   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:40.532663   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:41.032846   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:41.540537   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:42.032689   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:42.532771   12512 kapi.go:107] duration metric: took 1m21.50603798s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0528 20:24:42.534638   12512 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, nvidia-device-plugin, storage-provisioner, storage-provisioner-rancher, metrics-server, inspektor-gadget, yakd, helm-tiller, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0528 20:24:42.535872   12512 addons.go:510] duration metric: took 1m32.107045068s for enable addons: enabled=[cloud-spanner ingress-dns nvidia-device-plugin storage-provisioner storage-provisioner-rancher metrics-server inspektor-gadget yakd helm-tiller volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0528 20:24:42.535913   12512 start.go:245] waiting for cluster config update ...
	I0528 20:24:42.535929   12512 start.go:254] writing updated cluster config ...
	I0528 20:24:42.536249   12512 ssh_runner.go:195] Run: rm -f paused
	I0528 20:24:42.587087   12512 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0528 20:24:42.589127   12512 out.go:177] * Done! kubectl is now configured to use "addons-307023" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	May 28 20:27:48 addons-307023 crio[678]: time="2024-05-28 20:27:48.133969721Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716928068133941136,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584528,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d5477f98-1ac5-4547-bbd5-d3197e0932c0 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 20:27:48 addons-307023 crio[678]: time="2024-05-28 20:27:48.134684303Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7ee71a43-dc32-4324-9ae3-302961df32ed name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:27:48 addons-307023 crio[678]: time="2024-05-28 20:27:48.134739456Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7ee71a43-dc32-4324-9ae3-302961df32ed name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:27:48 addons-307023 crio[678]: time="2024-05-28 20:27:48.135118108Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:44e852d22ba1819ddf0ebf3d0807c81eb34686152ec8c0d2d28504332322f910,PodSandboxId:bf42055627c5f778e09c307e0d382b8cd97e98b2436484afb7ff68aaa6095122,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1716928061153443230,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-rrlcz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 64e12ad3-6c87-49f8-b726-fe0fbe89ae4a,},Annotations:map[string]string{io.kubernetes.container.hash: 6ff1a144,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b25b089b47c42d9c78f20b6b9c92fae06be840b62420d791663b1b24bc7309f5,PodSandboxId:0763ad6c0dc6d8501a1652e1f80c871817876837137898af4b5567b1887c73da,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:516475cc129da42866742567714ddc681e5eed7b9ee0b9e9c015e464b4221a00,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:501d84f5d06487ff81e506134dc922ed4fd2080d5521eb5b6ee4054fa17d15c4,State:CONTAINER_RUNNING,CreatedAt:1716927918822003784,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 827204ae-e6ad-4624-87ec-f215a8cd56dd,},Annotations:map[string]string{io.kubern
etes.container.hash: a5c3bd23,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b252fd37aa9f5d12c758c9f92d1d30b108d86442b0cc874ea70f6bbcb4652fd,PodSandboxId:cf610ed316048dec99893c84336ba42640572f6ea101da9f9c37e8a3027e281b,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bd42824d488ce58074f6d54cb051437d0dc2669f3f96a4d9b3b72a8d7ddda679,State:CONTAINER_RUNNING,CreatedAt:1716927889296922002,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-68456f997b-jtz8c,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: b1e41c1e-f373-4b51-9cf7-70350652cb99,},Annotations:map[string]string{io.kubernetes.container.hash: 737cf372,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea96afa17ea5229968ae5126d5b87ebacc473587a5b9b920907f3b088d2db71b,PodSandboxId:cbafe4ece952b44f2d401289fbd0398cb5d2750747349300574a5cf49a56c635,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1716927878035455173,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-9zg48,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 549f8b18-adb3-46d7-b9d6-66982b3a6ea6,},Annotations:map[string]string{io.kubernetes.container.hash: 8848d3f9,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4befefa5042be2982516cb700b3a4031a58d2b758c6a0c52516c31968ae3c1dc,PodSandboxId:3cb404e14430a3fcfae3f99d63a4353268180194444455c12b62c08b04cce7bf,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1716927862088624098,Labels:map[strin
g]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-4l9pr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d9d7a71a-f958-4ea6-851a-6268d18ef3c7,},Annotations:map[string]string{io.kubernetes.container.hash: 2b61162d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3ed0d6afef331c5b29ff51d1aee27830111aee694135f0db39ba20ffa10bac6,PodSandboxId:72036fc504f19d863774985074631a2a3f63ae3eb49adae5886ab66c1ad54bcd,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:
1716927861929656720,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xjswd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4aee838f-cff6-4b4e-9b37-08bd8d3cdbc2,},Annotations:map[string]string{io.kubernetes.container.hash: afc04852,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f8b8e68eeb825dbfe1b66ff7ec29f0c1031fe1dcf332aa48ad22deb75ffb888,PodSandboxId:83d262bedf4d9e8087c85dcc670607f92c478fd2eaaf04bc77071634c2e71df1,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,
CreatedAt:1716927850307483582,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-wpxcg,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: a9c0d228-38f8-4c7f-99d8-bd87c9f25ce2,},Annotations:map[string]string{io.kubernetes.container.hash: 55c66455,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:355a251e3cf31eb25dddce88b95ad2c82fec40efb40b26bf7d9a5ecbe490c54d,PodSandboxId:adc08090883c6d3e03290eecc1f5dc06e0ba00cc5efb1a80b0cc621418111219,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1716927841699867655,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-wjvkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9aa82de-329c-4c74-bdc0-f304386c8ede,},Annotations:map[string]string{io.kubernetes.container.hash: 36569b19,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c00a4fc421b3f7ba35f835a5ffb3586ed187a922a41a09e7d62310f40f28b4a,PodSandboxId:9c0d3246c219251ca87a4a7ec1763e4dd2e73259bee9e42c74b7c5be81800259,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d
628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716927797215259911,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91636457-a3cb-48a7-bfd4-58907cb354d4,},Annotations:map[string]string{io.kubernetes.container.hash: cc7bb4e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5fc52623b4367389165518a9a3ad68d3a6dfd0bea62fee6e8dd4c68333d1d46,PodSandboxId:7c5c0b887193c90d66def7ae7eb242acb7366790eb1927bd6fa5dcbb3ed48e17,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a67
4fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716927793886249131,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hmjmn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 805eb200-abef-49e1-b441-570367fec5ad,},Annotations:map[string]string{io.kubernetes.container.hash: f422ecb9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee29a48aa62269f9e1e3af0e1a3bcef0cc8ba7014553ab82c93aaf12
8c2b3c2d,PodSandboxId:2495266d97bb1d5cb4f6cd6f29256ca074665a7c52909c4e509b0dfb148e7f85,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716927791278495326,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zm9r7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02de5251-8d15-4ee9-b99b-978c02f4f9c5,},Annotations:map[string]string{io.kubernetes.container.hash: c31dc7d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a3a1afe43af2cdcd45e85e5e47b2e24cc33c6f39bd25773f3d79aa3320f7d35,PodSandboxId:fc0fa40ac15
ca4b2d833bff5a29b6698162423e2e5cf82e008671835e01bb941,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716927770428301949,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-307023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac4386d3c845bcc94595a3690ec06fc3,},Annotations:map[string]string{io.kubernetes.container.hash: 7722ad34,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d2d00755e2d2aa085fc127e00bff5cae198357367f1612c5a00654687283113,PodSandboxId:0881e09a01ed0effac1090fa172d7d93d343d9b37961684879dfd734a6
0797c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716927770398708161,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-307023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f11cf3c88298cfc595782890de812176,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56a2635bc2ea98051fa0060e6982cdbc878403ecd774b69a9700979890c020db,PodSandboxId:780d40c6d09cdb0796a0b522be0886c63e7d6f19d831fdbb49f94404e4680173,Metadata:&
ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716927770369120842,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-307023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22ce70e57fa105e011ca6bdbe769de6c,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ea69bdbd09c4bf83134ec7a0266e2b89e83fba1e72332118c62196e4802c2fd,PodSandboxId:9c890007588d6da51a68d5745609454c364d5ae51919f8cc5cc222a0de66a20e,
Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716927770361963110,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-307023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9329d3d3dd989369304d748209ebae5,},Annotations:map[string]string{io.kubernetes.container.hash: 6f4bb928,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7ee71a43-dc32-4324-9ae3-302961df32ed name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:27:48 addons-307023 crio[678]: time="2024-05-28 20:27:48.177286492Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f4434c81-8d41-41db-ad80-fb878da43356 name=/runtime.v1.RuntimeService/Version
	May 28 20:27:48 addons-307023 crio[678]: time="2024-05-28 20:27:48.177377563Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f4434c81-8d41-41db-ad80-fb878da43356 name=/runtime.v1.RuntimeService/Version
	May 28 20:27:48 addons-307023 crio[678]: time="2024-05-28 20:27:48.178612744Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2c343f12-5f5c-46b3-95bc-5aaa4cf08a04 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 20:27:48 addons-307023 crio[678]: time="2024-05-28 20:27:48.179971392Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716928068179944456,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584528,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2c343f12-5f5c-46b3-95bc-5aaa4cf08a04 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 20:27:48 addons-307023 crio[678]: time="2024-05-28 20:27:48.180474815Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7eb561aa-96cc-4daa-8c54-e5d41c3ca15a name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:27:48 addons-307023 crio[678]: time="2024-05-28 20:27:48.180542925Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7eb561aa-96cc-4daa-8c54-e5d41c3ca15a name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:27:48 addons-307023 crio[678]: time="2024-05-28 20:27:48.180961853Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:44e852d22ba1819ddf0ebf3d0807c81eb34686152ec8c0d2d28504332322f910,PodSandboxId:bf42055627c5f778e09c307e0d382b8cd97e98b2436484afb7ff68aaa6095122,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1716928061153443230,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-rrlcz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 64e12ad3-6c87-49f8-b726-fe0fbe89ae4a,},Annotations:map[string]string{io.kubernetes.container.hash: 6ff1a144,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b25b089b47c42d9c78f20b6b9c92fae06be840b62420d791663b1b24bc7309f5,PodSandboxId:0763ad6c0dc6d8501a1652e1f80c871817876837137898af4b5567b1887c73da,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:516475cc129da42866742567714ddc681e5eed7b9ee0b9e9c015e464b4221a00,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:501d84f5d06487ff81e506134dc922ed4fd2080d5521eb5b6ee4054fa17d15c4,State:CONTAINER_RUNNING,CreatedAt:1716927918822003784,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 827204ae-e6ad-4624-87ec-f215a8cd56dd,},Annotations:map[string]string{io.kubern
etes.container.hash: a5c3bd23,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b252fd37aa9f5d12c758c9f92d1d30b108d86442b0cc874ea70f6bbcb4652fd,PodSandboxId:cf610ed316048dec99893c84336ba42640572f6ea101da9f9c37e8a3027e281b,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bd42824d488ce58074f6d54cb051437d0dc2669f3f96a4d9b3b72a8d7ddda679,State:CONTAINER_RUNNING,CreatedAt:1716927889296922002,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-68456f997b-jtz8c,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: b1e41c1e-f373-4b51-9cf7-70350652cb99,},Annotations:map[string]string{io.kubernetes.container.hash: 737cf372,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea96afa17ea5229968ae5126d5b87ebacc473587a5b9b920907f3b088d2db71b,PodSandboxId:cbafe4ece952b44f2d401289fbd0398cb5d2750747349300574a5cf49a56c635,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1716927878035455173,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-9zg48,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 549f8b18-adb3-46d7-b9d6-66982b3a6ea6,},Annotations:map[string]string{io.kubernetes.container.hash: 8848d3f9,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4befefa5042be2982516cb700b3a4031a58d2b758c6a0c52516c31968ae3c1dc,PodSandboxId:3cb404e14430a3fcfae3f99d63a4353268180194444455c12b62c08b04cce7bf,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1716927862088624098,Labels:map[strin
g]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-4l9pr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d9d7a71a-f958-4ea6-851a-6268d18ef3c7,},Annotations:map[string]string{io.kubernetes.container.hash: 2b61162d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3ed0d6afef331c5b29ff51d1aee27830111aee694135f0db39ba20ffa10bac6,PodSandboxId:72036fc504f19d863774985074631a2a3f63ae3eb49adae5886ab66c1ad54bcd,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:
1716927861929656720,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xjswd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4aee838f-cff6-4b4e-9b37-08bd8d3cdbc2,},Annotations:map[string]string{io.kubernetes.container.hash: afc04852,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f8b8e68eeb825dbfe1b66ff7ec29f0c1031fe1dcf332aa48ad22deb75ffb888,PodSandboxId:83d262bedf4d9e8087c85dcc670607f92c478fd2eaaf04bc77071634c2e71df1,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,
CreatedAt:1716927850307483582,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-wpxcg,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: a9c0d228-38f8-4c7f-99d8-bd87c9f25ce2,},Annotations:map[string]string{io.kubernetes.container.hash: 55c66455,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:355a251e3cf31eb25dddce88b95ad2c82fec40efb40b26bf7d9a5ecbe490c54d,PodSandboxId:adc08090883c6d3e03290eecc1f5dc06e0ba00cc5efb1a80b0cc621418111219,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1716927841699867655,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-wjvkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9aa82de-329c-4c74-bdc0-f304386c8ede,},Annotations:map[string]string{io.kubernetes.container.hash: 36569b19,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c00a4fc421b3f7ba35f835a5ffb3586ed187a922a41a09e7d62310f40f28b4a,PodSandboxId:9c0d3246c219251ca87a4a7ec1763e4dd2e73259bee9e42c74b7c5be81800259,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d
628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716927797215259911,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91636457-a3cb-48a7-bfd4-58907cb354d4,},Annotations:map[string]string{io.kubernetes.container.hash: cc7bb4e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5fc52623b4367389165518a9a3ad68d3a6dfd0bea62fee6e8dd4c68333d1d46,PodSandboxId:7c5c0b887193c90d66def7ae7eb242acb7366790eb1927bd6fa5dcbb3ed48e17,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a67
4fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716927793886249131,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hmjmn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 805eb200-abef-49e1-b441-570367fec5ad,},Annotations:map[string]string{io.kubernetes.container.hash: f422ecb9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee29a48aa62269f9e1e3af0e1a3bcef0cc8ba7014553ab82c93aaf12
8c2b3c2d,PodSandboxId:2495266d97bb1d5cb4f6cd6f29256ca074665a7c52909c4e509b0dfb148e7f85,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716927791278495326,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zm9r7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02de5251-8d15-4ee9-b99b-978c02f4f9c5,},Annotations:map[string]string{io.kubernetes.container.hash: c31dc7d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a3a1afe43af2cdcd45e85e5e47b2e24cc33c6f39bd25773f3d79aa3320f7d35,PodSandboxId:fc0fa40ac15
ca4b2d833bff5a29b6698162423e2e5cf82e008671835e01bb941,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716927770428301949,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-307023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac4386d3c845bcc94595a3690ec06fc3,},Annotations:map[string]string{io.kubernetes.container.hash: 7722ad34,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d2d00755e2d2aa085fc127e00bff5cae198357367f1612c5a00654687283113,PodSandboxId:0881e09a01ed0effac1090fa172d7d93d343d9b37961684879dfd734a6
0797c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716927770398708161,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-307023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f11cf3c88298cfc595782890de812176,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56a2635bc2ea98051fa0060e6982cdbc878403ecd774b69a9700979890c020db,PodSandboxId:780d40c6d09cdb0796a0b522be0886c63e7d6f19d831fdbb49f94404e4680173,Metadata:&
ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716927770369120842,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-307023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22ce70e57fa105e011ca6bdbe769de6c,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ea69bdbd09c4bf83134ec7a0266e2b89e83fba1e72332118c62196e4802c2fd,PodSandboxId:9c890007588d6da51a68d5745609454c364d5ae51919f8cc5cc222a0de66a20e,
Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716927770361963110,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-307023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9329d3d3dd989369304d748209ebae5,},Annotations:map[string]string{io.kubernetes.container.hash: 6f4bb928,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7eb561aa-96cc-4daa-8c54-e5d41c3ca15a name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:27:48 addons-307023 crio[678]: time="2024-05-28 20:27:48.214753997Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=81cc0612-a1fe-49bf-9214-5f9c1f0ba47c name=/runtime.v1.RuntimeService/Version
	May 28 20:27:48 addons-307023 crio[678]: time="2024-05-28 20:27:48.214884739Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=81cc0612-a1fe-49bf-9214-5f9c1f0ba47c name=/runtime.v1.RuntimeService/Version
	May 28 20:27:48 addons-307023 crio[678]: time="2024-05-28 20:27:48.216429210Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bfe4ae65-bfb3-47cf-9750-778d0d4d943c name=/runtime.v1.ImageService/ImageFsInfo
	May 28 20:27:48 addons-307023 crio[678]: time="2024-05-28 20:27:48.217673285Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716928068217648417,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584528,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bfe4ae65-bfb3-47cf-9750-778d0d4d943c name=/runtime.v1.ImageService/ImageFsInfo
	May 28 20:27:48 addons-307023 crio[678]: time="2024-05-28 20:27:48.218313525Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=97a16584-21d4-4285-82d3-bf4616332400 name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:27:48 addons-307023 crio[678]: time="2024-05-28 20:27:48.218385484Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=97a16584-21d4-4285-82d3-bf4616332400 name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:27:48 addons-307023 crio[678]: time="2024-05-28 20:27:48.218703969Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:44e852d22ba1819ddf0ebf3d0807c81eb34686152ec8c0d2d28504332322f910,PodSandboxId:bf42055627c5f778e09c307e0d382b8cd97e98b2436484afb7ff68aaa6095122,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1716928061153443230,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-rrlcz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 64e12ad3-6c87-49f8-b726-fe0fbe89ae4a,},Annotations:map[string]string{io.kubernetes.container.hash: 6ff1a144,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b25b089b47c42d9c78f20b6b9c92fae06be840b62420d791663b1b24bc7309f5,PodSandboxId:0763ad6c0dc6d8501a1652e1f80c871817876837137898af4b5567b1887c73da,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:516475cc129da42866742567714ddc681e5eed7b9ee0b9e9c015e464b4221a00,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:501d84f5d06487ff81e506134dc922ed4fd2080d5521eb5b6ee4054fa17d15c4,State:CONTAINER_RUNNING,CreatedAt:1716927918822003784,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 827204ae-e6ad-4624-87ec-f215a8cd56dd,},Annotations:map[string]string{io.kubern
etes.container.hash: a5c3bd23,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b252fd37aa9f5d12c758c9f92d1d30b108d86442b0cc874ea70f6bbcb4652fd,PodSandboxId:cf610ed316048dec99893c84336ba42640572f6ea101da9f9c37e8a3027e281b,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bd42824d488ce58074f6d54cb051437d0dc2669f3f96a4d9b3b72a8d7ddda679,State:CONTAINER_RUNNING,CreatedAt:1716927889296922002,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-68456f997b-jtz8c,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: b1e41c1e-f373-4b51-9cf7-70350652cb99,},Annotations:map[string]string{io.kubernetes.container.hash: 737cf372,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea96afa17ea5229968ae5126d5b87ebacc473587a5b9b920907f3b088d2db71b,PodSandboxId:cbafe4ece952b44f2d401289fbd0398cb5d2750747349300574a5cf49a56c635,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1716927878035455173,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-9zg48,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 549f8b18-adb3-46d7-b9d6-66982b3a6ea6,},Annotations:map[string]string{io.kubernetes.container.hash: 8848d3f9,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4befefa5042be2982516cb700b3a4031a58d2b758c6a0c52516c31968ae3c1dc,PodSandboxId:3cb404e14430a3fcfae3f99d63a4353268180194444455c12b62c08b04cce7bf,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1716927862088624098,Labels:map[strin
g]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-4l9pr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d9d7a71a-f958-4ea6-851a-6268d18ef3c7,},Annotations:map[string]string{io.kubernetes.container.hash: 2b61162d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3ed0d6afef331c5b29ff51d1aee27830111aee694135f0db39ba20ffa10bac6,PodSandboxId:72036fc504f19d863774985074631a2a3f63ae3eb49adae5886ab66c1ad54bcd,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:
1716927861929656720,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xjswd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4aee838f-cff6-4b4e-9b37-08bd8d3cdbc2,},Annotations:map[string]string{io.kubernetes.container.hash: afc04852,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f8b8e68eeb825dbfe1b66ff7ec29f0c1031fe1dcf332aa48ad22deb75ffb888,PodSandboxId:83d262bedf4d9e8087c85dcc670607f92c478fd2eaaf04bc77071634c2e71df1,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,
CreatedAt:1716927850307483582,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-wpxcg,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: a9c0d228-38f8-4c7f-99d8-bd87c9f25ce2,},Annotations:map[string]string{io.kubernetes.container.hash: 55c66455,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:355a251e3cf31eb25dddce88b95ad2c82fec40efb40b26bf7d9a5ecbe490c54d,PodSandboxId:adc08090883c6d3e03290eecc1f5dc06e0ba00cc5efb1a80b0cc621418111219,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1716927841699867655,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-wjvkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9aa82de-329c-4c74-bdc0-f304386c8ede,},Annotations:map[string]string{io.kubernetes.container.hash: 36569b19,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c00a4fc421b3f7ba35f835a5ffb3586ed187a922a41a09e7d62310f40f28b4a,PodSandboxId:9c0d3246c219251ca87a4a7ec1763e4dd2e73259bee9e42c74b7c5be81800259,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d
628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716927797215259911,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91636457-a3cb-48a7-bfd4-58907cb354d4,},Annotations:map[string]string{io.kubernetes.container.hash: cc7bb4e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5fc52623b4367389165518a9a3ad68d3a6dfd0bea62fee6e8dd4c68333d1d46,PodSandboxId:7c5c0b887193c90d66def7ae7eb242acb7366790eb1927bd6fa5dcbb3ed48e17,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a67
4fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716927793886249131,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hmjmn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 805eb200-abef-49e1-b441-570367fec5ad,},Annotations:map[string]string{io.kubernetes.container.hash: f422ecb9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee29a48aa62269f9e1e3af0e1a3bcef0cc8ba7014553ab82c93aaf12
8c2b3c2d,PodSandboxId:2495266d97bb1d5cb4f6cd6f29256ca074665a7c52909c4e509b0dfb148e7f85,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716927791278495326,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zm9r7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02de5251-8d15-4ee9-b99b-978c02f4f9c5,},Annotations:map[string]string{io.kubernetes.container.hash: c31dc7d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a3a1afe43af2cdcd45e85e5e47b2e24cc33c6f39bd25773f3d79aa3320f7d35,PodSandboxId:fc0fa40ac15
ca4b2d833bff5a29b6698162423e2e5cf82e008671835e01bb941,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716927770428301949,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-307023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac4386d3c845bcc94595a3690ec06fc3,},Annotations:map[string]string{io.kubernetes.container.hash: 7722ad34,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d2d00755e2d2aa085fc127e00bff5cae198357367f1612c5a00654687283113,PodSandboxId:0881e09a01ed0effac1090fa172d7d93d343d9b37961684879dfd734a6
0797c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716927770398708161,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-307023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f11cf3c88298cfc595782890de812176,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56a2635bc2ea98051fa0060e6982cdbc878403ecd774b69a9700979890c020db,PodSandboxId:780d40c6d09cdb0796a0b522be0886c63e7d6f19d831fdbb49f94404e4680173,Metadata:&
ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716927770369120842,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-307023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22ce70e57fa105e011ca6bdbe769de6c,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ea69bdbd09c4bf83134ec7a0266e2b89e83fba1e72332118c62196e4802c2fd,PodSandboxId:9c890007588d6da51a68d5745609454c364d5ae51919f8cc5cc222a0de66a20e,
Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716927770361963110,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-307023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9329d3d3dd989369304d748209ebae5,},Annotations:map[string]string{io.kubernetes.container.hash: 6f4bb928,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=97a16584-21d4-4285-82d3-bf4616332400 name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:27:48 addons-307023 crio[678]: time="2024-05-28 20:27:48.257947650Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6eba78fa-d36b-4ca1-8eb9-8fe0cf560294 name=/runtime.v1.RuntimeService/Version
	May 28 20:27:48 addons-307023 crio[678]: time="2024-05-28 20:27:48.258041623Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6eba78fa-d36b-4ca1-8eb9-8fe0cf560294 name=/runtime.v1.RuntimeService/Version
	May 28 20:27:48 addons-307023 crio[678]: time="2024-05-28 20:27:48.259221306Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3d12373e-c05f-4e4f-a655-c95b2a36d8fb name=/runtime.v1.ImageService/ImageFsInfo
	May 28 20:27:48 addons-307023 crio[678]: time="2024-05-28 20:27:48.260477475Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716928068260448943,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584528,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3d12373e-c05f-4e4f-a655-c95b2a36d8fb name=/runtime.v1.ImageService/ImageFsInfo
	May 28 20:27:48 addons-307023 crio[678]: time="2024-05-28 20:27:48.261180634Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ab781e37-d8e7-48a3-bc48-cbbd25c94220 name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:27:48 addons-307023 crio[678]: time="2024-05-28 20:27:48.261230825Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ab781e37-d8e7-48a3-bc48-cbbd25c94220 name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:27:48 addons-307023 crio[678]: time="2024-05-28 20:27:48.261605476Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:44e852d22ba1819ddf0ebf3d0807c81eb34686152ec8c0d2d28504332322f910,PodSandboxId:bf42055627c5f778e09c307e0d382b8cd97e98b2436484afb7ff68aaa6095122,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1716928061153443230,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-rrlcz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 64e12ad3-6c87-49f8-b726-fe0fbe89ae4a,},Annotations:map[string]string{io.kubernetes.container.hash: 6ff1a144,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b25b089b47c42d9c78f20b6b9c92fae06be840b62420d791663b1b24bc7309f5,PodSandboxId:0763ad6c0dc6d8501a1652e1f80c871817876837137898af4b5567b1887c73da,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:516475cc129da42866742567714ddc681e5eed7b9ee0b9e9c015e464b4221a00,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:501d84f5d06487ff81e506134dc922ed4fd2080d5521eb5b6ee4054fa17d15c4,State:CONTAINER_RUNNING,CreatedAt:1716927918822003784,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 827204ae-e6ad-4624-87ec-f215a8cd56dd,},Annotations:map[string]string{io.kubern
etes.container.hash: a5c3bd23,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b252fd37aa9f5d12c758c9f92d1d30b108d86442b0cc874ea70f6bbcb4652fd,PodSandboxId:cf610ed316048dec99893c84336ba42640572f6ea101da9f9c37e8a3027e281b,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bd42824d488ce58074f6d54cb051437d0dc2669f3f96a4d9b3b72a8d7ddda679,State:CONTAINER_RUNNING,CreatedAt:1716927889296922002,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-68456f997b-jtz8c,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: b1e41c1e-f373-4b51-9cf7-70350652cb99,},Annotations:map[string]string{io.kubernetes.container.hash: 737cf372,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea96afa17ea5229968ae5126d5b87ebacc473587a5b9b920907f3b088d2db71b,PodSandboxId:cbafe4ece952b44f2d401289fbd0398cb5d2750747349300574a5cf49a56c635,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1716927878035455173,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-9zg48,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 549f8b18-adb3-46d7-b9d6-66982b3a6ea6,},Annotations:map[string]string{io.kubernetes.container.hash: 8848d3f9,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4befefa5042be2982516cb700b3a4031a58d2b758c6a0c52516c31968ae3c1dc,PodSandboxId:3cb404e14430a3fcfae3f99d63a4353268180194444455c12b62c08b04cce7bf,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1716927862088624098,Labels:map[strin
g]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-4l9pr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d9d7a71a-f958-4ea6-851a-6268d18ef3c7,},Annotations:map[string]string{io.kubernetes.container.hash: 2b61162d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3ed0d6afef331c5b29ff51d1aee27830111aee694135f0db39ba20ffa10bac6,PodSandboxId:72036fc504f19d863774985074631a2a3f63ae3eb49adae5886ab66c1ad54bcd,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:
1716927861929656720,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xjswd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4aee838f-cff6-4b4e-9b37-08bd8d3cdbc2,},Annotations:map[string]string{io.kubernetes.container.hash: afc04852,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f8b8e68eeb825dbfe1b66ff7ec29f0c1031fe1dcf332aa48ad22deb75ffb888,PodSandboxId:83d262bedf4d9e8087c85dcc670607f92c478fd2eaaf04bc77071634c2e71df1,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,
CreatedAt:1716927850307483582,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-wpxcg,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: a9c0d228-38f8-4c7f-99d8-bd87c9f25ce2,},Annotations:map[string]string{io.kubernetes.container.hash: 55c66455,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:355a251e3cf31eb25dddce88b95ad2c82fec40efb40b26bf7d9a5ecbe490c54d,PodSandboxId:adc08090883c6d3e03290eecc1f5dc06e0ba00cc5efb1a80b0cc621418111219,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1716927841699867655,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-wjvkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9aa82de-329c-4c74-bdc0-f304386c8ede,},Annotations:map[string]string{io.kubernetes.container.hash: 36569b19,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c00a4fc421b3f7ba35f835a5ffb3586ed187a922a41a09e7d62310f40f28b4a,PodSandboxId:9c0d3246c219251ca87a4a7ec1763e4dd2e73259bee9e42c74b7c5be81800259,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d
628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716927797215259911,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91636457-a3cb-48a7-bfd4-58907cb354d4,},Annotations:map[string]string{io.kubernetes.container.hash: cc7bb4e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5fc52623b4367389165518a9a3ad68d3a6dfd0bea62fee6e8dd4c68333d1d46,PodSandboxId:7c5c0b887193c90d66def7ae7eb242acb7366790eb1927bd6fa5dcbb3ed48e17,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a67
4fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716927793886249131,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hmjmn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 805eb200-abef-49e1-b441-570367fec5ad,},Annotations:map[string]string{io.kubernetes.container.hash: f422ecb9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee29a48aa62269f9e1e3af0e1a3bcef0cc8ba7014553ab82c93aaf12
8c2b3c2d,PodSandboxId:2495266d97bb1d5cb4f6cd6f29256ca074665a7c52909c4e509b0dfb148e7f85,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716927791278495326,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zm9r7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02de5251-8d15-4ee9-b99b-978c02f4f9c5,},Annotations:map[string]string{io.kubernetes.container.hash: c31dc7d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a3a1afe43af2cdcd45e85e5e47b2e24cc33c6f39bd25773f3d79aa3320f7d35,PodSandboxId:fc0fa40ac15
ca4b2d833bff5a29b6698162423e2e5cf82e008671835e01bb941,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716927770428301949,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-307023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac4386d3c845bcc94595a3690ec06fc3,},Annotations:map[string]string{io.kubernetes.container.hash: 7722ad34,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d2d00755e2d2aa085fc127e00bff5cae198357367f1612c5a00654687283113,PodSandboxId:0881e09a01ed0effac1090fa172d7d93d343d9b37961684879dfd734a6
0797c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716927770398708161,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-307023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f11cf3c88298cfc595782890de812176,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56a2635bc2ea98051fa0060e6982cdbc878403ecd774b69a9700979890c020db,PodSandboxId:780d40c6d09cdb0796a0b522be0886c63e7d6f19d831fdbb49f94404e4680173,Metadata:&
ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716927770369120842,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-307023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22ce70e57fa105e011ca6bdbe769de6c,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ea69bdbd09c4bf83134ec7a0266e2b89e83fba1e72332118c62196e4802c2fd,PodSandboxId:9c890007588d6da51a68d5745609454c364d5ae51919f8cc5cc222a0de66a20e,
Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716927770361963110,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-307023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9329d3d3dd989369304d748209ebae5,},Annotations:map[string]string{io.kubernetes.container.hash: 6f4bb928,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ab781e37-d8e7-48a3-bc48-cbbd25c94220 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	44e852d22ba18       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      7 seconds ago       Running             hello-world-app           0                   bf42055627c5f       hello-world-app-86c47465fc-rrlcz
	b25b089b47c42       docker.io/library/nginx@sha256:516475cc129da42866742567714ddc681e5eed7b9ee0b9e9c015e464b4221a00                              2 minutes ago       Running             nginx                     0                   0763ad6c0dc6d       nginx
	4b252fd37aa9f       ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474                        2 minutes ago       Running             headlamp                  0                   cf610ed316048       headlamp-68456f997b-jtz8c
	ea96afa17ea52       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 3 minutes ago       Running             gcp-auth                  0                   cbafe4ece952b       gcp-auth-5db96cd9b4-9zg48
	4befefa5042be       684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66                                                             3 minutes ago       Exited              patch                     1                   3cb404e14430a       ingress-nginx-admission-patch-4l9pr
	f3ed0d6afef33       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   3 minutes ago       Exited              create                    0                   72036fc504f19       ingress-nginx-admission-create-xjswd
	5f8b8e68eeb82       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              3 minutes ago       Running             yakd                      0                   83d262bedf4d9       yakd-dashboard-5ddbf7d777-wpxcg
	355a251e3cf31       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        3 minutes ago       Running             metrics-server            0                   adc08090883c6       metrics-server-c59844bb4-wjvkg
	5c00a4fc421b3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   9c0d3246c2192       storage-provisioner
	b5fc52623b436       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             4 minutes ago       Running             coredns                   0                   7c5c0b887193c       coredns-7db6d8ff4d-hmjmn
	ee29a48aa6226       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                                             4 minutes ago       Running             kube-proxy                0                   2495266d97bb1       kube-proxy-zm9r7
	1a3a1afe43af2       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                             4 minutes ago       Running             etcd                      0                   fc0fa40ac15ca       etcd-addons-307023
	4d2d00755e2d2       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                                             4 minutes ago       Running             kube-scheduler            0                   0881e09a01ed0       kube-scheduler-addons-307023
	56a2635bc2ea9       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                                             4 minutes ago       Running             kube-controller-manager   0                   780d40c6d09cd       kube-controller-manager-addons-307023
	8ea69bdbd09c4       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                                             4 minutes ago       Running             kube-apiserver            0                   9c890007588d6       kube-apiserver-addons-307023
	
	
	==> coredns [b5fc52623b4367389165518a9a3ad68d3a6dfd0bea62fee6e8dd4c68333d1d46] <==
	[INFO] 10.244.0.7:40244 - 576 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000146022s
	[INFO] 10.244.0.7:50777 - 57115 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000134875s
	[INFO] 10.244.0.7:50777 - 10521 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000124428s
	[INFO] 10.244.0.7:40623 - 23829 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000097082s
	[INFO] 10.244.0.7:40623 - 29719 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000099101s
	[INFO] 10.244.0.7:57130 - 44038 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000138694s
	[INFO] 10.244.0.7:57130 - 62980 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000124673s
	[INFO] 10.244.0.7:44285 - 57575 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000071153s
	[INFO] 10.244.0.7:44285 - 46817 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000073486s
	[INFO] 10.244.0.7:49042 - 12883 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000061762s
	[INFO] 10.244.0.7:49042 - 13133 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000067265s
	[INFO] 10.244.0.7:48451 - 44549 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000046643s
	[INFO] 10.244.0.7:48451 - 32519 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000049279s
	[INFO] 10.244.0.7:36430 - 27828 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000038772s
	[INFO] 10.244.0.7:36430 - 2230 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000068683s
	[INFO] 10.244.0.22:54672 - 6661 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000678621s
	[INFO] 10.244.0.22:44241 - 10797 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000466562s
	[INFO] 10.244.0.22:39358 - 29302 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000107879s
	[INFO] 10.244.0.22:57973 - 60005 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000106986s
	[INFO] 10.244.0.22:58356 - 10500 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000107302s
	[INFO] 10.244.0.22:49176 - 36376 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000102899s
	[INFO] 10.244.0.22:54393 - 32442 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000602725s
	[INFO] 10.244.0.22:46509 - 35204 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000434545s
	[INFO] 10.244.0.25:59008 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000157994s
	[INFO] 10.244.0.25:52385 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000085538s
	
	
	==> describe nodes <==
	Name:               addons-307023
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-307023
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82
	                    minikube.k8s.io/name=addons-307023
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_28T20_22_56_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-307023
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 20:22:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-307023
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 20:27:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 May 2024 20:25:59 +0000   Tue, 28 May 2024 20:22:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 May 2024 20:25:59 +0000   Tue, 28 May 2024 20:22:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 May 2024 20:25:59 +0000   Tue, 28 May 2024 20:22:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 May 2024 20:25:59 +0000   Tue, 28 May 2024 20:22:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.230
	  Hostname:    addons-307023
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 e6c6949e5e2a4c13b4bc7ebf3ad315cb
	  System UUID:                e6c6949e-5e2a-4c13-b4bc-7ebf3ad315cb
	  Boot ID:                    166e0ee6-5851-451e-b967-057317e752a3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86c47465fc-rrlcz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  gcp-auth                    gcp-auth-5db96cd9b4-9zg48                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  headlamp                    headlamp-68456f997b-jtz8c                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m5s
	  kube-system                 coredns-7db6d8ff4d-hmjmn                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m38s
	  kube-system                 etcd-addons-307023                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m53s
	  kube-system                 kube-apiserver-addons-307023             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-controller-manager-addons-307023    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-proxy-zm9r7                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 kube-scheduler-addons-307023             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 metrics-server-c59844bb4-wjvkg           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m32s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-wpxcg          0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             498Mi (13%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m36s  kube-proxy       
	  Normal  Starting                 4m53s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m53s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m53s  kubelet          Node addons-307023 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m53s  kubelet          Node addons-307023 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m53s  kubelet          Node addons-307023 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m52s  kubelet          Node addons-307023 status is now: NodeReady
	  Normal  RegisteredNode           4m39s  node-controller  Node addons-307023 event: Registered Node addons-307023 in Controller
	
	
	==> dmesg <==
	[  +0.076472] kauditd_printk_skb: 69 callbacks suppressed
	[May28 20:23] kauditd_printk_skb: 18 callbacks suppressed
	[  +9.244378] systemd-fstab-generator[1506]: Ignoring "noauto" option for root device
	[  +5.176616] kauditd_printk_skb: 117 callbacks suppressed
	[  +5.167531] kauditd_printk_skb: 109 callbacks suppressed
	[  +6.683851] kauditd_printk_skb: 98 callbacks suppressed
	[ +20.471985] kauditd_printk_skb: 2 callbacks suppressed
	[May28 20:24] kauditd_printk_skb: 25 callbacks suppressed
	[ +11.608759] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.399711] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.015499] kauditd_printk_skb: 109 callbacks suppressed
	[  +6.479386] kauditd_printk_skb: 8 callbacks suppressed
	[  +6.691471] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.222895] kauditd_printk_skb: 51 callbacks suppressed
	[  +8.725465] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.465413] kauditd_printk_skb: 25 callbacks suppressed
	[May28 20:25] kauditd_printk_skb: 65 callbacks suppressed
	[  +6.060728] kauditd_printk_skb: 19 callbacks suppressed
	[  +7.536821] kauditd_printk_skb: 39 callbacks suppressed
	[ +24.679775] kauditd_printk_skb: 2 callbacks suppressed
	[ +11.319226] kauditd_printk_skb: 3 callbacks suppressed
	[May28 20:26] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.378262] kauditd_printk_skb: 33 callbacks suppressed
	[May28 20:27] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.906639] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [1a3a1afe43af2cdcd45e85e5e47b2e24cc33c6f39bd25773f3d79aa3320f7d35] <==
	{"level":"info","ts":"2024-05-28T20:24:14.248901Z","caller":"traceutil/trace.go:171","msg":"trace[151110983] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:972; }","duration":"230.781876ms","start":"2024-05-28T20:24:14.018111Z","end":"2024-05-28T20:24:14.248893Z","steps":["trace[151110983] 'agreement among raft nodes before linearized reading'  (duration: 230.515632ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T20:24:14.249066Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.483181ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-28T20:24:14.249103Z","caller":"traceutil/trace.go:171","msg":"trace[461391185] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:972; }","duration":"101.520706ms","start":"2024-05-28T20:24:14.147577Z","end":"2024-05-28T20:24:14.249097Z","steps":["trace[461391185] 'agreement among raft nodes before linearized reading'  (duration: 101.470851ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T20:24:30.581274Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"300.545956ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11453"}
	{"level":"info","ts":"2024-05-28T20:24:30.581319Z","caller":"traceutil/trace.go:171","msg":"trace[1614791197] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1113; }","duration":"300.618747ms","start":"2024-05-28T20:24:30.28069Z","end":"2024-05-28T20:24:30.581308Z","steps":["trace[1614791197] 'range keys from in-memory index tree'  (duration: 300.301105ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T20:24:30.581345Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-28T20:24:30.280675Z","time spent":"300.665386ms","remote":"127.0.0.1:40700","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":3,"response size":11476,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"warn","ts":"2024-05-28T20:24:30.581496Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"294.667057ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14363"}
	{"level":"info","ts":"2024-05-28T20:24:30.581513Z","caller":"traceutil/trace.go:171","msg":"trace[1350744861] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1113; }","duration":"294.762269ms","start":"2024-05-28T20:24:30.286746Z","end":"2024-05-28T20:24:30.581508Z","steps":["trace[1350744861] 'range keys from in-memory index tree'  (duration: 294.588874ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-28T20:24:44.880672Z","caller":"traceutil/trace.go:171","msg":"trace[453788130] linearizableReadLoop","detail":"{readStateIndex:1254; appliedIndex:1253; }","duration":"193.155784ms","start":"2024-05-28T20:24:44.687503Z","end":"2024-05-28T20:24:44.880659Z","steps":["trace[453788130] 'read index received'  (duration: 191.852017ms)","trace[453788130] 'applied index is now lower than readState.Index'  (duration: 1.303212ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-28T20:24:44.881009Z","caller":"traceutil/trace.go:171","msg":"trace[1046907904] transaction","detail":"{read_only:false; response_revision:1215; number_of_response:1; }","duration":"217.485065ms","start":"2024-05-28T20:24:44.663513Z","end":"2024-05-28T20:24:44.880998Z","steps":["trace[1046907904] 'process raft request'  (duration: 217.057326ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T20:24:44.881212Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.690887ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-307023\" ","response":"range_response_count:1 size:9252"}
	{"level":"info","ts":"2024-05-28T20:24:44.881261Z","caller":"traceutil/trace.go:171","msg":"trace[921117727] range","detail":"{range_begin:/registry/minions/addons-307023; range_end:; response_count:1; response_revision:1215; }","duration":"193.773621ms","start":"2024-05-28T20:24:44.68748Z","end":"2024-05-28T20:24:44.881253Z","steps":["trace[921117727] 'agreement among raft nodes before linearized reading'  (duration: 193.658039ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-28T20:24:46.943387Z","caller":"traceutil/trace.go:171","msg":"trace[92720894] transaction","detail":"{read_only:false; response_revision:1223; number_of_response:1; }","duration":"103.478548ms","start":"2024-05-28T20:24:46.839891Z","end":"2024-05-28T20:24:46.943369Z","steps":["trace[92720894] 'process raft request'  (duration: 102.797236ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-28T20:24:49.164973Z","caller":"traceutil/trace.go:171","msg":"trace[2142571499] linearizableReadLoop","detail":"{readStateIndex:1267; appliedIndex:1266; }","duration":"193.958504ms","start":"2024-05-28T20:24:48.971Z","end":"2024-05-28T20:24:49.164959Z","steps":["trace[2142571499] 'read index received'  (duration: 193.748139ms)","trace[2142571499] 'applied index is now lower than readState.Index'  (duration: 209.933µs)"],"step_count":2}
	{"level":"info","ts":"2024-05-28T20:24:49.165227Z","caller":"traceutil/trace.go:171","msg":"trace[1358655908] transaction","detail":"{read_only:false; response_revision:1228; number_of_response:1; }","duration":"338.135244ms","start":"2024-05-28T20:24:48.827081Z","end":"2024-05-28T20:24:49.165216Z","steps":["trace[1358655908] 'process raft request'  (duration: 337.789491ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T20:24:49.165335Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-28T20:24:48.827065Z","time spent":"338.210444ms","remote":"127.0.0.1:40784","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":541,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-307023\" mod_revision:1159 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-307023\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-307023\" > >"}
	{"level":"warn","ts":"2024-05-28T20:24:49.165615Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"194.633793ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-05-28T20:24:49.165638Z","caller":"traceutil/trace.go:171","msg":"trace[474094034] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1228; }","duration":"194.677236ms","start":"2024-05-28T20:24:48.970954Z","end":"2024-05-28T20:24:49.165631Z","steps":["trace[474094034] 'agreement among raft nodes before linearized reading'  (duration: 194.611001ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T20:25:13.268879Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"169.925572ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7210984011906190305 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/storageclasses/local-path\" mod_revision:1432 > success:<request_delete_range:<key:\"/registry/storageclasses/local-path\" > > failure:<request_range:<key:\"/registry/storageclasses/local-path\" > >>","response":"size:18"}
	{"level":"info","ts":"2024-05-28T20:25:13.269033Z","caller":"traceutil/trace.go:171","msg":"trace[2044234968] linearizableReadLoop","detail":"{readStateIndex:1489; appliedIndex:1488; }","duration":"189.096349ms","start":"2024-05-28T20:25:13.079927Z","end":"2024-05-28T20:25:13.269023Z","steps":["trace[2044234968] 'read index received'  (duration: 18.578425ms)","trace[2044234968] 'applied index is now lower than readState.Index'  (duration: 170.516707ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-28T20:25:13.269241Z","caller":"traceutil/trace.go:171","msg":"trace[2040328284] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1439; }","duration":"203.206219ms","start":"2024-05-28T20:25:13.066024Z","end":"2024-05-28T20:25:13.269231Z","steps":["trace[2040328284] 'process raft request'  (duration: 32.48907ms)","trace[2040328284] 'compare'  (duration: 169.547651ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-28T20:25:13.269481Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"189.550248ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/local-path-storage/local-path-provisioner-8d985888d\" ","response":"range_response_count:1 size:2711"}
	{"level":"info","ts":"2024-05-28T20:25:13.269527Z","caller":"traceutil/trace.go:171","msg":"trace[781804698] range","detail":"{range_begin:/registry/replicasets/local-path-storage/local-path-provisioner-8d985888d; range_end:; response_count:1; response_revision:1439; }","duration":"189.616201ms","start":"2024-05-28T20:25:13.079904Z","end":"2024-05-28T20:25:13.26952Z","steps":["trace[781804698] 'agreement among raft nodes before linearized reading'  (duration: 189.515402ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T20:25:13.269608Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.691594ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-28T20:25:13.269639Z","caller":"traceutil/trace.go:171","msg":"trace[223807614] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1439; }","duration":"121.744922ms","start":"2024-05-28T20:25:13.147889Z","end":"2024-05-28T20:25:13.269634Z","steps":["trace[223807614] 'agreement among raft nodes before linearized reading'  (duration: 121.701636ms)"],"step_count":1}
	
	
	==> gcp-auth [ea96afa17ea5229968ae5126d5b87ebacc473587a5b9b920907f3b088d2db71b] <==
	2024/05/28 20:24:38 GCP Auth Webhook started!
	2024/05/28 20:24:43 Ready to marshal response ...
	2024/05/28 20:24:43 Ready to write response ...
	2024/05/28 20:24:43 Ready to marshal response ...
	2024/05/28 20:24:43 Ready to write response ...
	2024/05/28 20:24:43 Ready to marshal response ...
	2024/05/28 20:24:43 Ready to write response ...
	2024/05/28 20:24:53 Ready to marshal response ...
	2024/05/28 20:24:53 Ready to write response ...
	2024/05/28 20:24:53 Ready to marshal response ...
	2024/05/28 20:24:53 Ready to write response ...
	2024/05/28 20:24:59 Ready to marshal response ...
	2024/05/28 20:24:59 Ready to write response ...
	2024/05/28 20:25:00 Ready to marshal response ...
	2024/05/28 20:25:00 Ready to write response ...
	2024/05/28 20:25:12 Ready to marshal response ...
	2024/05/28 20:25:12 Ready to write response ...
	2024/05/28 20:25:14 Ready to marshal response ...
	2024/05/28 20:25:14 Ready to write response ...
	2024/05/28 20:25:37 Ready to marshal response ...
	2024/05/28 20:25:37 Ready to write response ...
	2024/05/28 20:26:05 Ready to marshal response ...
	2024/05/28 20:26:05 Ready to write response ...
	2024/05/28 20:27:37 Ready to marshal response ...
	2024/05/28 20:27:37 Ready to write response ...
	
	
	==> kernel <==
	 20:27:48 up 5 min,  0 users,  load average: 0.95, 1.10, 0.56
	Linux addons-307023 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8ea69bdbd09c4bf83134ec7a0266e2b89e83fba1e72332118c62196e4802c2fd] <==
	E0528 20:25:11.818054       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0528 20:25:11.818285       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.58.136:443/apis/metrics.k8s.io/v1beta1: Get "https://10.97.58.136:443/apis/metrics.k8s.io/v1beta1": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0528 20:25:11.837591       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0528 20:25:11.847285       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	I0528 20:25:14.115699       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0528 20:25:14.304175       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.143.122"}
	E0528 20:25:28.437170       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0528 20:25:53.605341       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0528 20:26:22.529582       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0528 20:26:22.530068       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0528 20:26:22.588591       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0528 20:26:22.588648       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0528 20:26:22.598301       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0528 20:26:22.598383       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0528 20:26:22.603143       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0528 20:26:22.603197       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0528 20:26:22.639133       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0528 20:26:22.639189       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0528 20:26:23.599019       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0528 20:26:23.639427       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0528 20:26:23.648196       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0528 20:27:37.509945       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.139.198"}
	E0528 20:27:40.523917       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E0528 20:27:42.967111       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [56a2635bc2ea98051fa0060e6982cdbc878403ecd774b69a9700979890c020db] <==
	E0528 20:26:40.257151       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0528 20:26:54.108974       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0528 20:26:54.109070       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0528 20:26:55.390721       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0528 20:26:55.390820       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0528 20:27:03.920001       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0528 20:27:03.920032       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0528 20:27:13.726669       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0528 20:27:13.726703       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0528 20:27:35.489653       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0528 20:27:35.489710       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0528 20:27:37.364123       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="51.123049ms"
	I0528 20:27:37.377433       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="13.221003ms"
	I0528 20:27:37.378054       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="129.587µs"
	W0528 20:27:38.323197       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0528 20:27:38.323233       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0528 20:27:40.315393       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0528 20:27:40.320574       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="7.028µs"
	I0528 20:27:40.328228       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0528 20:27:41.401395       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="14.832145ms"
	I0528 20:27:41.401536       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="31.889µs"
	W0528 20:27:43.857653       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0528 20:27:43.857707       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0528 20:27:45.035453       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0528 20:27:45.035525       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [ee29a48aa62269f9e1e3af0e1a3bcef0cc8ba7014553ab82c93aaf128c2b3c2d] <==
	I0528 20:23:12.124248       1 server_linux.go:69] "Using iptables proxy"
	I0528 20:23:12.139669       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.230"]
	I0528 20:23:12.237210       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0528 20:23:12.237256       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0528 20:23:12.237271       1 server_linux.go:165] "Using iptables Proxier"
	I0528 20:23:12.244041       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0528 20:23:12.244227       1 server.go:872] "Version info" version="v1.30.1"
	I0528 20:23:12.244250       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 20:23:12.252316       1 config.go:192] "Starting service config controller"
	I0528 20:23:12.252351       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0528 20:23:12.252376       1 config.go:101] "Starting endpoint slice config controller"
	I0528 20:23:12.252380       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0528 20:23:12.252699       1 config.go:319] "Starting node config controller"
	I0528 20:23:12.256929       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0528 20:23:12.353183       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0528 20:23:12.353230       1 shared_informer.go:320] Caches are synced for service config
	I0528 20:23:12.357178       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4d2d00755e2d2aa085fc127e00bff5cae198357367f1612c5a00654687283113] <==
	W0528 20:22:53.718607       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0528 20:22:53.718659       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0528 20:22:53.747926       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0528 20:22:53.747975       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0528 20:22:53.807729       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0528 20:22:53.807888       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0528 20:22:53.872691       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0528 20:22:53.872740       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0528 20:22:53.914964       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0528 20:22:53.915129       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0528 20:22:53.928671       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0528 20:22:53.929117       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0528 20:22:53.930030       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0528 20:22:53.930305       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0528 20:22:53.937867       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0528 20:22:53.938019       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0528 20:22:53.956150       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0528 20:22:53.956267       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0528 20:22:53.968968       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0528 20:22:53.969011       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0528 20:22:54.221073       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0528 20:22:54.221123       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0528 20:22:54.240740       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0528 20:22:54.240883       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0528 20:22:56.784464       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 28 20:27:37 addons-307023 kubelet[1273]: I0528 20:27:37.348854    1273 memory_manager.go:354] "RemoveStaleState removing state" podUID="5595c3ca-5a1c-4c3c-9647-413836e28765" containerName="csi-snapshotter"
	May 28 20:27:37 addons-307023 kubelet[1273]: I0528 20:27:37.401302    1273 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm9dr\" (UniqueName: \"kubernetes.io/projected/64e12ad3-6c87-49f8-b726-fe0fbe89ae4a-kube-api-access-pm9dr\") pod \"hello-world-app-86c47465fc-rrlcz\" (UID: \"64e12ad3-6c87-49f8-b726-fe0fbe89ae4a\") " pod="default/hello-world-app-86c47465fc-rrlcz"
	May 28 20:27:37 addons-307023 kubelet[1273]: I0528 20:27:37.401357    1273 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/64e12ad3-6c87-49f8-b726-fe0fbe89ae4a-gcp-creds\") pod \"hello-world-app-86c47465fc-rrlcz\" (UID: \"64e12ad3-6c87-49f8-b726-fe0fbe89ae4a\") " pod="default/hello-world-app-86c47465fc-rrlcz"
	May 28 20:27:38 addons-307023 kubelet[1273]: I0528 20:27:38.408940    1273 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcgqt\" (UniqueName: \"kubernetes.io/projected/1f7b4e7c-b982-4c04-add7-525795548760-kube-api-access-tcgqt\") pod \"1f7b4e7c-b982-4c04-add7-525795548760\" (UID: \"1f7b4e7c-b982-4c04-add7-525795548760\") "
	May 28 20:27:38 addons-307023 kubelet[1273]: I0528 20:27:38.411085    1273 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f7b4e7c-b982-4c04-add7-525795548760-kube-api-access-tcgqt" (OuterVolumeSpecName: "kube-api-access-tcgqt") pod "1f7b4e7c-b982-4c04-add7-525795548760" (UID: "1f7b4e7c-b982-4c04-add7-525795548760"). InnerVolumeSpecName "kube-api-access-tcgqt". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 28 20:27:38 addons-307023 kubelet[1273]: I0528 20:27:38.509932    1273 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-tcgqt\" (UniqueName: \"kubernetes.io/projected/1f7b4e7c-b982-4c04-add7-525795548760-kube-api-access-tcgqt\") on node \"addons-307023\" DevicePath \"\""
	May 28 20:27:39 addons-307023 kubelet[1273]: I0528 20:27:39.342223    1273 scope.go:117] "RemoveContainer" containerID="0e979012c4140e29ffa27cd235b165ab15c775f7179922b3d3aeaf76379db2e9"
	May 28 20:27:39 addons-307023 kubelet[1273]: I0528 20:27:39.369895    1273 scope.go:117] "RemoveContainer" containerID="0e979012c4140e29ffa27cd235b165ab15c775f7179922b3d3aeaf76379db2e9"
	May 28 20:27:39 addons-307023 kubelet[1273]: E0528 20:27:39.379078    1273 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e979012c4140e29ffa27cd235b165ab15c775f7179922b3d3aeaf76379db2e9\": container with ID starting with 0e979012c4140e29ffa27cd235b165ab15c775f7179922b3d3aeaf76379db2e9 not found: ID does not exist" containerID="0e979012c4140e29ffa27cd235b165ab15c775f7179922b3d3aeaf76379db2e9"
	May 28 20:27:39 addons-307023 kubelet[1273]: I0528 20:27:39.379129    1273 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e979012c4140e29ffa27cd235b165ab15c775f7179922b3d3aeaf76379db2e9"} err="failed to get container status \"0e979012c4140e29ffa27cd235b165ab15c775f7179922b3d3aeaf76379db2e9\": rpc error: code = NotFound desc = could not find container \"0e979012c4140e29ffa27cd235b165ab15c775f7179922b3d3aeaf76379db2e9\": container with ID starting with 0e979012c4140e29ffa27cd235b165ab15c775f7179922b3d3aeaf76379db2e9 not found: ID does not exist"
	May 28 20:27:39 addons-307023 kubelet[1273]: I0528 20:27:39.454246    1273 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f7b4e7c-b982-4c04-add7-525795548760" path="/var/lib/kubelet/pods/1f7b4e7c-b982-4c04-add7-525795548760/volumes"
	May 28 20:27:41 addons-307023 kubelet[1273]: I0528 20:27:41.390550    1273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-86c47465fc-rrlcz" podStartSLOduration=1.140217338 podStartE2EDuration="4.390499211s" podCreationTimestamp="2024-05-28 20:27:37 +0000 UTC" firstStartedPulling="2024-05-28 20:27:37.882940463 +0000 UTC m=+282.564040674" lastFinishedPulling="2024-05-28 20:27:41.133222324 +0000 UTC m=+285.814322547" observedRunningTime="2024-05-28 20:27:41.388976167 +0000 UTC m=+286.070076399" watchObservedRunningTime="2024-05-28 20:27:41.390499211 +0000 UTC m=+286.071599442"
	May 28 20:27:41 addons-307023 kubelet[1273]: I0528 20:27:41.450013    1273 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4aee838f-cff6-4b4e-9b37-08bd8d3cdbc2" path="/var/lib/kubelet/pods/4aee838f-cff6-4b4e-9b37-08bd8d3cdbc2/volumes"
	May 28 20:27:41 addons-307023 kubelet[1273]: I0528 20:27:41.450396    1273 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9d7a71a-f958-4ea6-851a-6268d18ef3c7" path="/var/lib/kubelet/pods/d9d7a71a-f958-4ea6-851a-6268d18ef3c7/volumes"
	May 28 20:27:43 addons-307023 kubelet[1273]: I0528 20:27:43.650142    1273 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gj8mm\" (UniqueName: \"kubernetes.io/projected/167a9905-80e8-4cc4-810b-4126763d9076-kube-api-access-gj8mm\") pod \"167a9905-80e8-4cc4-810b-4126763d9076\" (UID: \"167a9905-80e8-4cc4-810b-4126763d9076\") "
	May 28 20:27:43 addons-307023 kubelet[1273]: I0528 20:27:43.650190    1273 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/167a9905-80e8-4cc4-810b-4126763d9076-webhook-cert\") pod \"167a9905-80e8-4cc4-810b-4126763d9076\" (UID: \"167a9905-80e8-4cc4-810b-4126763d9076\") "
	May 28 20:27:43 addons-307023 kubelet[1273]: I0528 20:27:43.653454    1273 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/167a9905-80e8-4cc4-810b-4126763d9076-kube-api-access-gj8mm" (OuterVolumeSpecName: "kube-api-access-gj8mm") pod "167a9905-80e8-4cc4-810b-4126763d9076" (UID: "167a9905-80e8-4cc4-810b-4126763d9076"). InnerVolumeSpecName "kube-api-access-gj8mm". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 28 20:27:43 addons-307023 kubelet[1273]: I0528 20:27:43.654116    1273 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/167a9905-80e8-4cc4-810b-4126763d9076-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "167a9905-80e8-4cc4-810b-4126763d9076" (UID: "167a9905-80e8-4cc4-810b-4126763d9076"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	May 28 20:27:43 addons-307023 kubelet[1273]: I0528 20:27:43.751334    1273 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/167a9905-80e8-4cc4-810b-4126763d9076-webhook-cert\") on node \"addons-307023\" DevicePath \"\""
	May 28 20:27:43 addons-307023 kubelet[1273]: I0528 20:27:43.751363    1273 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-gj8mm\" (UniqueName: \"kubernetes.io/projected/167a9905-80e8-4cc4-810b-4126763d9076-kube-api-access-gj8mm\") on node \"addons-307023\" DevicePath \"\""
	May 28 20:27:44 addons-307023 kubelet[1273]: I0528 20:27:44.388536    1273 scope.go:117] "RemoveContainer" containerID="88c8f95dc69d4fc57aad3d07eefc4fc06486321885b72da35268a9fa5f586ae3"
	May 28 20:27:44 addons-307023 kubelet[1273]: I0528 20:27:44.407992    1273 scope.go:117] "RemoveContainer" containerID="88c8f95dc69d4fc57aad3d07eefc4fc06486321885b72da35268a9fa5f586ae3"
	May 28 20:27:44 addons-307023 kubelet[1273]: E0528 20:27:44.408347    1273 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88c8f95dc69d4fc57aad3d07eefc4fc06486321885b72da35268a9fa5f586ae3\": container with ID starting with 88c8f95dc69d4fc57aad3d07eefc4fc06486321885b72da35268a9fa5f586ae3 not found: ID does not exist" containerID="88c8f95dc69d4fc57aad3d07eefc4fc06486321885b72da35268a9fa5f586ae3"
	May 28 20:27:44 addons-307023 kubelet[1273]: I0528 20:27:44.408376    1273 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88c8f95dc69d4fc57aad3d07eefc4fc06486321885b72da35268a9fa5f586ae3"} err="failed to get container status \"88c8f95dc69d4fc57aad3d07eefc4fc06486321885b72da35268a9fa5f586ae3\": rpc error: code = NotFound desc = could not find container \"88c8f95dc69d4fc57aad3d07eefc4fc06486321885b72da35268a9fa5f586ae3\": container with ID starting with 88c8f95dc69d4fc57aad3d07eefc4fc06486321885b72da35268a9fa5f586ae3 not found: ID does not exist"
	May 28 20:27:45 addons-307023 kubelet[1273]: I0528 20:27:45.449614    1273 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="167a9905-80e8-4cc4-810b-4126763d9076" path="/var/lib/kubelet/pods/167a9905-80e8-4cc4-810b-4126763d9076/volumes"
	
	
	==> storage-provisioner [5c00a4fc421b3f7ba35f835a5ffb3586ed187a922a41a09e7d62310f40f28b4a] <==
	I0528 20:23:17.851728       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0528 20:23:17.865856       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0528 20:23:17.865943       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0528 20:23:17.886930       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0528 20:23:17.887155       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-307023_0ffd0080-ee1e-4fd4-b08c-662145dfa312!
	I0528 20:23:17.887520       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"27e55524-0954-481a-a161-595387a48ad7", APIVersion:"v1", ResourceVersion:"620", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-307023_0ffd0080-ee1e-4fd4-b08c-662145dfa312 became leader
	I0528 20:23:17.987959       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-307023_0ffd0080-ee1e-4fd4-b08c-662145dfa312!
	

                                                
                                                
-- /stdout --
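The storage-provisioner log above shows it acquiring the kube-system/k8s.io-minikube-hostpath lease before starting its controller. As a purely illustrative aside, the sketch below shows the same leader-election idea using client-go's leaderelection package with a Lease lock; the real provisioner uses its own library and an Endpoints-based lock (as the Event line above shows), so the lock kind, timings, and names here are assumptions, not minikube's actual code.

// Illustrative only: a hypothetical sketch of the leader-election pattern the
// storage-provisioner log shows, written with client-go's leaderelection
// package and a Lease lock. Lock kind, timings, and names are assumptions.
package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes the sketch runs inside the cluster
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	id, _ := os.Hostname()

	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Namespace: "kube-system",
			Name:      "k8s.io-minikube-hostpath", // lease name taken from the log above
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("acquired lease, starting provisioner controller")
			},
			OnStoppedLeading: func() {
				log.Println("lost lease, stopping")
			},
		},
	})
}

Only the replica holding the lease runs the provisioning loop, which is why the log reports "successfully acquired lease" before "Started provisioner controller".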
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-307023 -n addons-307023
helpers_test.go:261: (dbg) Run:  kubectl --context addons-307023 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (155.51s)

                                                
                                    
TestAddons/parallel/MetricsServer (340.58s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.353632ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-wjvkg" [a9aa82de-329c-4c74-bdc0-f304386c8ede] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
helpers_test.go:344: "metrics-server-c59844bb4-wjvkg" [a9aa82de-329c-4c74-bdc0-f304386c8ede] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004734998s
addons_test.go:417: (dbg) Run:  kubectl --context addons-307023 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-307023 top pods -n kube-system: exit status 1 (56.900233ms)

                                                
                                                
** stderr ** 
	error: Metrics API not available

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-307023 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-307023 top pods -n kube-system: exit status 1 (75.995263ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-hmjmn, age: 2m2.480116695s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-307023 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-307023 top pods -n kube-system: exit status 1 (82.088712ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-hmjmn, age: 2m7.609337473s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-307023 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-307023 top pods -n kube-system: exit status 1 (62.654369ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-hmjmn, age: 2m14.697660974s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-307023 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-307023 top pods -n kube-system: exit status 1 (67.570038ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-hmjmn, age: 2m25.170761127s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-307023 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-307023 top pods -n kube-system: exit status 1 (66.200857ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-hmjmn, age: 2m47.305843137s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-307023 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-307023 top pods -n kube-system: exit status 1 (63.194376ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-hmjmn, age: 3m5.775197065s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-307023 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-307023 top pods -n kube-system: exit status 1 (63.222345ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-hmjmn, age: 3m32.960283089s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-307023 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-307023 top pods -n kube-system: exit status 1 (60.739834ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-hmjmn, age: 4m15.096435498s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-307023 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-307023 top pods -n kube-system: exit status 1 (62.077687ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-hmjmn, age: 5m36.724974884s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-307023 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-307023 top pods -n kube-system: exit status 1 (60.06609ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-hmjmn, age: 6m17.245037754s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-307023 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-307023 top pods -n kube-system: exit status 1 (63.528499ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-hmjmn, age: 7m30.92426107s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-307023 addons disable metrics-server --alsologtostderr -v=1
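The repeated kubectl invocations above reflect the test polling until the Metrics API answers, which never happens here. Below is only an illustrative stand-alone sketch of that polling idea; the function name, context name, and timings are assumptions, not minikube's addons_test.go implementation.

// Illustrative only: a stand-alone Go sketch of re-running `kubectl top pods`
// until the Metrics API responds or a deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForPodMetrics re-runs `kubectl top pods -n kube-system` until it exits 0
// or the deadline passes, mirroring the repeated attempts in the log above.
func waitForPodMetrics(kubeContext string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"top", "pods", "-n", "kube-system").CombinedOutput()
		if err == nil {
			fmt.Printf("metrics available:\n%s", out)
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("metrics never became available: %v\n%s", err, out)
		}
		time.Sleep(10 * time.Second) // wait before the next attempt
	}
}

func main() {
	if err := waitForPodMetrics("addons-307023", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}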
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-307023 -n addons-307023
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-307023 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-307023 logs -n 25: (1.366619728s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.1 | 28 May 24 20:22 UTC | 28 May 24 20:22 UTC |
	| delete  | -p download-only-984992                                                                     | download-only-984992 | jenkins | v1.33.1 | 28 May 24 20:22 UTC | 28 May 24 20:22 UTC |
	| delete  | -p download-only-610519                                                                     | download-only-610519 | jenkins | v1.33.1 | 28 May 24 20:22 UTC | 28 May 24 20:22 UTC |
	| delete  | -p download-only-984992                                                                     | download-only-984992 | jenkins | v1.33.1 | 28 May 24 20:22 UTC | 28 May 24 20:22 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-408531 | jenkins | v1.33.1 | 28 May 24 20:22 UTC |                     |
	|         | binary-mirror-408531                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:38549                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-408531                                                                     | binary-mirror-408531 | jenkins | v1.33.1 | 28 May 24 20:22 UTC | 28 May 24 20:22 UTC |
	| addons  | enable dashboard -p                                                                         | addons-307023        | jenkins | v1.33.1 | 28 May 24 20:22 UTC |                     |
	|         | addons-307023                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-307023        | jenkins | v1.33.1 | 28 May 24 20:22 UTC |                     |
	|         | addons-307023                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-307023 --wait=true                                                                | addons-307023        | jenkins | v1.33.1 | 28 May 24 20:22 UTC | 28 May 24 20:24 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-307023        | jenkins | v1.33.1 | 28 May 24 20:24 UTC | 28 May 24 20:24 UTC |
	|         | -p addons-307023                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-307023        | jenkins | v1.33.1 | 28 May 24 20:24 UTC | 28 May 24 20:24 UTC |
	|         | -p addons-307023                                                                            |                      |         |         |                     |                     |
	| addons  | addons-307023 addons disable                                                                | addons-307023        | jenkins | v1.33.1 | 28 May 24 20:24 UTC | 28 May 24 20:24 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-307023        | jenkins | v1.33.1 | 28 May 24 20:25 UTC | 28 May 24 20:25 UTC |
	|         | addons-307023                                                                               |                      |         |         |                     |                     |
	| ip      | addons-307023 ip                                                                            | addons-307023        | jenkins | v1.33.1 | 28 May 24 20:25 UTC | 28 May 24 20:25 UTC |
	| addons  | addons-307023 addons disable                                                                | addons-307023        | jenkins | v1.33.1 | 28 May 24 20:25 UTC | 28 May 24 20:25 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-307023        | jenkins | v1.33.1 | 28 May 24 20:25 UTC | 28 May 24 20:25 UTC |
	|         | addons-307023                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-307023 ssh cat                                                                       | addons-307023        | jenkins | v1.33.1 | 28 May 24 20:25 UTC | 28 May 24 20:25 UTC |
	|         | /opt/local-path-provisioner/pvc-ea111a43-617c-4baa-a9fd-5cb0ed5a97d7_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-307023 addons disable                                                                | addons-307023        | jenkins | v1.33.1 | 28 May 24 20:25 UTC | 28 May 24 20:25 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-307023 ssh curl -s                                                                   | addons-307023        | jenkins | v1.33.1 | 28 May 24 20:25 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-307023 addons                                                                        | addons-307023        | jenkins | v1.33.1 | 28 May 24 20:26 UTC | 28 May 24 20:26 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-307023 addons                                                                        | addons-307023        | jenkins | v1.33.1 | 28 May 24 20:26 UTC | 28 May 24 20:26 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-307023 ip                                                                            | addons-307023        | jenkins | v1.33.1 | 28 May 24 20:27 UTC | 28 May 24 20:27 UTC |
	| addons  | addons-307023 addons disable                                                                | addons-307023        | jenkins | v1.33.1 | 28 May 24 20:27 UTC | 28 May 24 20:27 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-307023 addons disable                                                                | addons-307023        | jenkins | v1.33.1 | 28 May 24 20:27 UTC | 28 May 24 20:27 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-307023 addons                                                                        | addons-307023        | jenkins | v1.33.1 | 28 May 24 20:30 UTC | 28 May 24 20:30 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/28 20:22:16
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0528 20:22:16.299687   12512 out.go:291] Setting OutFile to fd 1 ...
	I0528 20:22:16.299944   12512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:22:16.299957   12512 out.go:304] Setting ErrFile to fd 2...
	I0528 20:22:16.299961   12512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:22:16.300139   12512 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
	I0528 20:22:16.300709   12512 out.go:298] Setting JSON to false
	I0528 20:22:16.301564   12512 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":279,"bootTime":1716927457,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0528 20:22:16.301624   12512 start.go:139] virtualization: kvm guest
	I0528 20:22:16.303632   12512 out.go:177] * [addons-307023] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0528 20:22:16.304886   12512 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 20:22:16.306161   12512 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 20:22:16.304858   12512 notify.go:220] Checking for updates...
	I0528 20:22:16.308339   12512 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 20:22:16.309433   12512 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 20:22:16.310627   12512 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0528 20:22:16.311976   12512 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 20:22:16.313300   12512 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 20:22:16.344364   12512 out.go:177] * Using the kvm2 driver based on user configuration
	I0528 20:22:16.345582   12512 start.go:297] selected driver: kvm2
	I0528 20:22:16.345593   12512 start.go:901] validating driver "kvm2" against <nil>
	I0528 20:22:16.345605   12512 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0528 20:22:16.346303   12512 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 20:22:16.346375   12512 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18966-3963/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0528 20:22:16.360089   12512 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0528 20:22:16.360134   12512 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0528 20:22:16.360341   12512 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 20:22:16.360406   12512 cni.go:84] Creating CNI manager for ""
	I0528 20:22:16.360422   12512 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 20:22:16.360433   12512 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0528 20:22:16.360490   12512 start.go:340] cluster config:
	{Name:addons-307023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-307023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 20:22:16.360586   12512 iso.go:125] acquiring lock: {Name:mk0ef48bd19f4019c3ee87391d15b9afc1d5e8d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 20:22:16.362213   12512 out.go:177] * Starting "addons-307023" primary control-plane node in "addons-307023" cluster
	I0528 20:22:16.363256   12512 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 20:22:16.363286   12512 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0528 20:22:16.363292   12512 cache.go:56] Caching tarball of preloaded images
	I0528 20:22:16.363365   12512 preload.go:173] Found /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0528 20:22:16.363376   12512 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0528 20:22:16.363642   12512 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/config.json ...
	I0528 20:22:16.363659   12512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/config.json: {Name:mk9bcf9f72796568cf263ac6c092a3172b864dd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:22:16.363774   12512 start.go:360] acquireMachinesLock for addons-307023: {Name:mk6fb3103b370c7f3d9923fd9de7afe2c77239d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0528 20:22:16.363816   12512 start.go:364] duration metric: took 29.975µs to acquireMachinesLock for "addons-307023"
	I0528 20:22:16.363832   12512 start.go:93] Provisioning new machine with config: &{Name:addons-307023 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-307023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0528 20:22:16.363880   12512 start.go:125] createHost starting for "" (driver="kvm2")
	I0528 20:22:16.365325   12512 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0528 20:22:16.365460   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:22:16.365501   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:22:16.379003   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37911
	I0528 20:22:16.379390   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:22:16.379885   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:22:16.379912   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:22:16.380227   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:22:16.380389   12512 main.go:141] libmachine: (addons-307023) Calling .GetMachineName
	I0528 20:22:16.380527   12512 main.go:141] libmachine: (addons-307023) Calling .DriverName
	I0528 20:22:16.380651   12512 start.go:159] libmachine.API.Create for "addons-307023" (driver="kvm2")
	I0528 20:22:16.380692   12512 client.go:168] LocalClient.Create starting
	I0528 20:22:16.380737   12512 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem
	I0528 20:22:16.644996   12512 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem
	I0528 20:22:16.858993   12512 main.go:141] libmachine: Running pre-create checks...
	I0528 20:22:16.859017   12512 main.go:141] libmachine: (addons-307023) Calling .PreCreateCheck
	I0528 20:22:16.859498   12512 main.go:141] libmachine: (addons-307023) Calling .GetConfigRaw
	I0528 20:22:16.859873   12512 main.go:141] libmachine: Creating machine...
	I0528 20:22:16.859886   12512 main.go:141] libmachine: (addons-307023) Calling .Create
	I0528 20:22:16.860034   12512 main.go:141] libmachine: (addons-307023) Creating KVM machine...
	I0528 20:22:16.861252   12512 main.go:141] libmachine: (addons-307023) DBG | found existing default KVM network
	I0528 20:22:16.862021   12512 main.go:141] libmachine: (addons-307023) DBG | I0528 20:22:16.861873   12534 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015330}
	I0528 20:22:16.862042   12512 main.go:141] libmachine: (addons-307023) DBG | created network xml: 
	I0528 20:22:16.862058   12512 main.go:141] libmachine: (addons-307023) DBG | <network>
	I0528 20:22:16.862071   12512 main.go:141] libmachine: (addons-307023) DBG |   <name>mk-addons-307023</name>
	I0528 20:22:16.862080   12512 main.go:141] libmachine: (addons-307023) DBG |   <dns enable='no'/>
	I0528 20:22:16.862087   12512 main.go:141] libmachine: (addons-307023) DBG |   
	I0528 20:22:16.862097   12512 main.go:141] libmachine: (addons-307023) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0528 20:22:16.862109   12512 main.go:141] libmachine: (addons-307023) DBG |     <dhcp>
	I0528 20:22:16.862151   12512 main.go:141] libmachine: (addons-307023) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0528 20:22:16.862174   12512 main.go:141] libmachine: (addons-307023) DBG |     </dhcp>
	I0528 20:22:16.862185   12512 main.go:141] libmachine: (addons-307023) DBG |   </ip>
	I0528 20:22:16.862196   12512 main.go:141] libmachine: (addons-307023) DBG |   
	I0528 20:22:16.862201   12512 main.go:141] libmachine: (addons-307023) DBG | </network>
	I0528 20:22:16.862206   12512 main.go:141] libmachine: (addons-307023) DBG | 
	I0528 20:22:16.867691   12512 main.go:141] libmachine: (addons-307023) DBG | trying to create private KVM network mk-addons-307023 192.168.39.0/24...
	I0528 20:22:16.931381   12512 main.go:141] libmachine: (addons-307023) DBG | private KVM network mk-addons-307023 192.168.39.0/24 created
	I0528 20:22:16.931415   12512 main.go:141] libmachine: (addons-307023) Setting up store path in /home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023 ...
	I0528 20:22:16.931440   12512 main.go:141] libmachine: (addons-307023) DBG | I0528 20:22:16.931343   12534 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 20:22:16.931487   12512 main.go:141] libmachine: (addons-307023) Building disk image from file:///home/jenkins/minikube-integration/18966-3963/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso
	I0528 20:22:16.931518   12512 main.go:141] libmachine: (addons-307023) Downloading /home/jenkins/minikube-integration/18966-3963/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18966-3963/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0528 20:22:17.174781   12512 main.go:141] libmachine: (addons-307023) DBG | I0528 20:22:17.174661   12534 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/id_rsa...
	I0528 20:22:17.296769   12512 main.go:141] libmachine: (addons-307023) DBG | I0528 20:22:17.296644   12534 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/addons-307023.rawdisk...
	I0528 20:22:17.296798   12512 main.go:141] libmachine: (addons-307023) DBG | Writing magic tar header
	I0528 20:22:17.296808   12512 main.go:141] libmachine: (addons-307023) DBG | Writing SSH key tar header
	I0528 20:22:17.296820   12512 main.go:141] libmachine: (addons-307023) DBG | I0528 20:22:17.296747   12534 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023 ...
	I0528 20:22:17.296844   12512 main.go:141] libmachine: (addons-307023) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023
	I0528 20:22:17.296861   12512 main.go:141] libmachine: (addons-307023) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18966-3963/.minikube/machines
	I0528 20:22:17.296871   12512 main.go:141] libmachine: (addons-307023) Setting executable bit set on /home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023 (perms=drwx------)
	I0528 20:22:17.296882   12512 main.go:141] libmachine: (addons-307023) Setting executable bit set on /home/jenkins/minikube-integration/18966-3963/.minikube/machines (perms=drwxr-xr-x)
	I0528 20:22:17.296930   12512 main.go:141] libmachine: (addons-307023) Setting executable bit set on /home/jenkins/minikube-integration/18966-3963/.minikube (perms=drwxr-xr-x)
	I0528 20:22:17.296974   12512 main.go:141] libmachine: (addons-307023) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 20:22:17.296985   12512 main.go:141] libmachine: (addons-307023) Setting executable bit set on /home/jenkins/minikube-integration/18966-3963 (perms=drwxrwxr-x)
	I0528 20:22:17.297003   12512 main.go:141] libmachine: (addons-307023) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0528 20:22:17.297013   12512 main.go:141] libmachine: (addons-307023) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0528 20:22:17.297024   12512 main.go:141] libmachine: (addons-307023) Creating domain...
	I0528 20:22:17.297039   12512 main.go:141] libmachine: (addons-307023) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18966-3963
	I0528 20:22:17.297052   12512 main.go:141] libmachine: (addons-307023) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0528 20:22:17.297063   12512 main.go:141] libmachine: (addons-307023) DBG | Checking permissions on dir: /home/jenkins
	I0528 20:22:17.297074   12512 main.go:141] libmachine: (addons-307023) DBG | Checking permissions on dir: /home
	I0528 20:22:17.297082   12512 main.go:141] libmachine: (addons-307023) DBG | Skipping /home - not owner
	I0528 20:22:17.298037   12512 main.go:141] libmachine: (addons-307023) define libvirt domain using xml: 
	I0528 20:22:17.298062   12512 main.go:141] libmachine: (addons-307023) <domain type='kvm'>
	I0528 20:22:17.298075   12512 main.go:141] libmachine: (addons-307023)   <name>addons-307023</name>
	I0528 20:22:17.298085   12512 main.go:141] libmachine: (addons-307023)   <memory unit='MiB'>4000</memory>
	I0528 20:22:17.298094   12512 main.go:141] libmachine: (addons-307023)   <vcpu>2</vcpu>
	I0528 20:22:17.298101   12512 main.go:141] libmachine: (addons-307023)   <features>
	I0528 20:22:17.298114   12512 main.go:141] libmachine: (addons-307023)     <acpi/>
	I0528 20:22:17.298118   12512 main.go:141] libmachine: (addons-307023)     <apic/>
	I0528 20:22:17.298123   12512 main.go:141] libmachine: (addons-307023)     <pae/>
	I0528 20:22:17.298130   12512 main.go:141] libmachine: (addons-307023)     
	I0528 20:22:17.298135   12512 main.go:141] libmachine: (addons-307023)   </features>
	I0528 20:22:17.298145   12512 main.go:141] libmachine: (addons-307023)   <cpu mode='host-passthrough'>
	I0528 20:22:17.298156   12512 main.go:141] libmachine: (addons-307023)   
	I0528 20:22:17.298171   12512 main.go:141] libmachine: (addons-307023)   </cpu>
	I0528 20:22:17.298184   12512 main.go:141] libmachine: (addons-307023)   <os>
	I0528 20:22:17.298194   12512 main.go:141] libmachine: (addons-307023)     <type>hvm</type>
	I0528 20:22:17.298202   12512 main.go:141] libmachine: (addons-307023)     <boot dev='cdrom'/>
	I0528 20:22:17.298210   12512 main.go:141] libmachine: (addons-307023)     <boot dev='hd'/>
	I0528 20:22:17.298215   12512 main.go:141] libmachine: (addons-307023)     <bootmenu enable='no'/>
	I0528 20:22:17.298222   12512 main.go:141] libmachine: (addons-307023)   </os>
	I0528 20:22:17.298234   12512 main.go:141] libmachine: (addons-307023)   <devices>
	I0528 20:22:17.298245   12512 main.go:141] libmachine: (addons-307023)     <disk type='file' device='cdrom'>
	I0528 20:22:17.298262   12512 main.go:141] libmachine: (addons-307023)       <source file='/home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/boot2docker.iso'/>
	I0528 20:22:17.298274   12512 main.go:141] libmachine: (addons-307023)       <target dev='hdc' bus='scsi'/>
	I0528 20:22:17.298286   12512 main.go:141] libmachine: (addons-307023)       <readonly/>
	I0528 20:22:17.298294   12512 main.go:141] libmachine: (addons-307023)     </disk>
	I0528 20:22:17.298304   12512 main.go:141] libmachine: (addons-307023)     <disk type='file' device='disk'>
	I0528 20:22:17.298321   12512 main.go:141] libmachine: (addons-307023)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0528 20:22:17.298334   12512 main.go:141] libmachine: (addons-307023)       <source file='/home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/addons-307023.rawdisk'/>
	I0528 20:22:17.298342   12512 main.go:141] libmachine: (addons-307023)       <target dev='hda' bus='virtio'/>
	I0528 20:22:17.298350   12512 main.go:141] libmachine: (addons-307023)     </disk>
	I0528 20:22:17.298357   12512 main.go:141] libmachine: (addons-307023)     <interface type='network'>
	I0528 20:22:17.298365   12512 main.go:141] libmachine: (addons-307023)       <source network='mk-addons-307023'/>
	I0528 20:22:17.298372   12512 main.go:141] libmachine: (addons-307023)       <model type='virtio'/>
	I0528 20:22:17.298380   12512 main.go:141] libmachine: (addons-307023)     </interface>
	I0528 20:22:17.298392   12512 main.go:141] libmachine: (addons-307023)     <interface type='network'>
	I0528 20:22:17.298404   12512 main.go:141] libmachine: (addons-307023)       <source network='default'/>
	I0528 20:22:17.298412   12512 main.go:141] libmachine: (addons-307023)       <model type='virtio'/>
	I0528 20:22:17.298423   12512 main.go:141] libmachine: (addons-307023)     </interface>
	I0528 20:22:17.298432   12512 main.go:141] libmachine: (addons-307023)     <serial type='pty'>
	I0528 20:22:17.298444   12512 main.go:141] libmachine: (addons-307023)       <target port='0'/>
	I0528 20:22:17.298454   12512 main.go:141] libmachine: (addons-307023)     </serial>
	I0528 20:22:17.298473   12512 main.go:141] libmachine: (addons-307023)     <console type='pty'>
	I0528 20:22:17.298487   12512 main.go:141] libmachine: (addons-307023)       <target type='serial' port='0'/>
	I0528 20:22:17.298493   12512 main.go:141] libmachine: (addons-307023)     </console>
	I0528 20:22:17.298500   12512 main.go:141] libmachine: (addons-307023)     <rng model='virtio'>
	I0528 20:22:17.298507   12512 main.go:141] libmachine: (addons-307023)       <backend model='random'>/dev/random</backend>
	I0528 20:22:17.298514   12512 main.go:141] libmachine: (addons-307023)     </rng>
	I0528 20:22:17.298521   12512 main.go:141] libmachine: (addons-307023)     
	I0528 20:22:17.298530   12512 main.go:141] libmachine: (addons-307023)     
	I0528 20:22:17.298542   12512 main.go:141] libmachine: (addons-307023)   </devices>
	I0528 20:22:17.298552   12512 main.go:141] libmachine: (addons-307023) </domain>
	I0528 20:22:17.298562   12512 main.go:141] libmachine: (addons-307023) 
	I0528 20:22:17.304457   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:c0:73:53 in network default
	I0528 20:22:17.305064   12512 main.go:141] libmachine: (addons-307023) Ensuring networks are active...
	I0528 20:22:17.305083   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:17.305689   12512 main.go:141] libmachine: (addons-307023) Ensuring network default is active
	I0528 20:22:17.305962   12512 main.go:141] libmachine: (addons-307023) Ensuring network mk-addons-307023 is active
	I0528 20:22:17.306424   12512 main.go:141] libmachine: (addons-307023) Getting domain xml...
	I0528 20:22:17.307003   12512 main.go:141] libmachine: (addons-307023) Creating domain...
	I0528 20:22:18.666855   12512 main.go:141] libmachine: (addons-307023) Waiting to get IP...
	I0528 20:22:18.667741   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:18.668053   12512 main.go:141] libmachine: (addons-307023) DBG | unable to find current IP address of domain addons-307023 in network mk-addons-307023
	I0528 20:22:18.668103   12512 main.go:141] libmachine: (addons-307023) DBG | I0528 20:22:18.668056   12534 retry.go:31] will retry after 254.097744ms: waiting for machine to come up
	I0528 20:22:18.923393   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:18.923770   12512 main.go:141] libmachine: (addons-307023) DBG | unable to find current IP address of domain addons-307023 in network mk-addons-307023
	I0528 20:22:18.923803   12512 main.go:141] libmachine: (addons-307023) DBG | I0528 20:22:18.923739   12534 retry.go:31] will retry after 364.094801ms: waiting for machine to come up
	I0528 20:22:19.289187   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:19.289596   12512 main.go:141] libmachine: (addons-307023) DBG | unable to find current IP address of domain addons-307023 in network mk-addons-307023
	I0528 20:22:19.289619   12512 main.go:141] libmachine: (addons-307023) DBG | I0528 20:22:19.289552   12534 retry.go:31] will retry after 304.027275ms: waiting for machine to come up
	I0528 20:22:19.594988   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:19.595336   12512 main.go:141] libmachine: (addons-307023) DBG | unable to find current IP address of domain addons-307023 in network mk-addons-307023
	I0528 20:22:19.595365   12512 main.go:141] libmachine: (addons-307023) DBG | I0528 20:22:19.595282   12534 retry.go:31] will retry after 501.270308ms: waiting for machine to come up
	I0528 20:22:20.097808   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:20.098266   12512 main.go:141] libmachine: (addons-307023) DBG | unable to find current IP address of domain addons-307023 in network mk-addons-307023
	I0528 20:22:20.098293   12512 main.go:141] libmachine: (addons-307023) DBG | I0528 20:22:20.098216   12534 retry.go:31] will retry after 460.735285ms: waiting for machine to come up
	I0528 20:22:20.560858   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:20.561409   12512 main.go:141] libmachine: (addons-307023) DBG | unable to find current IP address of domain addons-307023 in network mk-addons-307023
	I0528 20:22:20.561434   12512 main.go:141] libmachine: (addons-307023) DBG | I0528 20:22:20.561366   12534 retry.go:31] will retry after 764.144242ms: waiting for machine to come up
	I0528 20:22:21.327164   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:21.327563   12512 main.go:141] libmachine: (addons-307023) DBG | unable to find current IP address of domain addons-307023 in network mk-addons-307023
	I0528 20:22:21.327593   12512 main.go:141] libmachine: (addons-307023) DBG | I0528 20:22:21.327524   12534 retry.go:31] will retry after 891.559058ms: waiting for machine to come up
	I0528 20:22:22.222184   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:22.222606   12512 main.go:141] libmachine: (addons-307023) DBG | unable to find current IP address of domain addons-307023 in network mk-addons-307023
	I0528 20:22:22.222659   12512 main.go:141] libmachine: (addons-307023) DBG | I0528 20:22:22.222564   12534 retry.go:31] will retry after 1.150241524s: waiting for machine to come up
	I0528 20:22:23.374802   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:23.375080   12512 main.go:141] libmachine: (addons-307023) DBG | unable to find current IP address of domain addons-307023 in network mk-addons-307023
	I0528 20:22:23.375100   12512 main.go:141] libmachine: (addons-307023) DBG | I0528 20:22:23.375041   12534 retry.go:31] will retry after 1.424523439s: waiting for machine to come up
	I0528 20:22:24.801720   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:24.802188   12512 main.go:141] libmachine: (addons-307023) DBG | unable to find current IP address of domain addons-307023 in network mk-addons-307023
	I0528 20:22:24.802211   12512 main.go:141] libmachine: (addons-307023) DBG | I0528 20:22:24.802153   12534 retry.go:31] will retry after 1.834091116s: waiting for machine to come up
	I0528 20:22:26.638045   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:26.638517   12512 main.go:141] libmachine: (addons-307023) DBG | unable to find current IP address of domain addons-307023 in network mk-addons-307023
	I0528 20:22:26.638546   12512 main.go:141] libmachine: (addons-307023) DBG | I0528 20:22:26.638475   12534 retry.go:31] will retry after 2.55493296s: waiting for machine to come up
	I0528 20:22:29.196052   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:29.196485   12512 main.go:141] libmachine: (addons-307023) DBG | unable to find current IP address of domain addons-307023 in network mk-addons-307023
	I0528 20:22:29.196505   12512 main.go:141] libmachine: (addons-307023) DBG | I0528 20:22:29.196438   12534 retry.go:31] will retry after 3.539361988s: waiting for machine to come up
	I0528 20:22:32.737402   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:32.737722   12512 main.go:141] libmachine: (addons-307023) DBG | unable to find current IP address of domain addons-307023 in network mk-addons-307023
	I0528 20:22:32.737742   12512 main.go:141] libmachine: (addons-307023) DBG | I0528 20:22:32.737688   12534 retry.go:31] will retry after 4.468051148s: waiting for machine to come up
	I0528 20:22:37.206865   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:37.207376   12512 main.go:141] libmachine: (addons-307023) Found IP for machine: 192.168.39.230
	I0528 20:22:37.207401   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has current primary IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:37.207409   12512 main.go:141] libmachine: (addons-307023) Reserving static IP address...
	I0528 20:22:37.207773   12512 main.go:141] libmachine: (addons-307023) DBG | unable to find host DHCP lease matching {name: "addons-307023", mac: "52:54:00:40:c7:f9", ip: "192.168.39.230"} in network mk-addons-307023
	I0528 20:22:37.275489   12512 main.go:141] libmachine: (addons-307023) DBG | Getting to WaitForSSH function...
	I0528 20:22:37.275522   12512 main.go:141] libmachine: (addons-307023) Reserved static IP address: 192.168.39.230
	I0528 20:22:37.275538   12512 main.go:141] libmachine: (addons-307023) Waiting for SSH to be available...
	I0528 20:22:37.278120   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:37.278539   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:minikube Clientid:01:52:54:00:40:c7:f9}
	I0528 20:22:37.278567   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:37.278777   12512 main.go:141] libmachine: (addons-307023) DBG | Using SSH client type: external
	I0528 20:22:37.278806   12512 main.go:141] libmachine: (addons-307023) DBG | Using SSH private key: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/id_rsa (-rw-------)
	I0528 20:22:37.278835   12512 main.go:141] libmachine: (addons-307023) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.230 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0528 20:22:37.278848   12512 main.go:141] libmachine: (addons-307023) DBG | About to run SSH command:
	I0528 20:22:37.278864   12512 main.go:141] libmachine: (addons-307023) DBG | exit 0
	I0528 20:22:37.410138   12512 main.go:141] libmachine: (addons-307023) DBG | SSH cmd err, output: <nil>: 
	I0528 20:22:37.410422   12512 main.go:141] libmachine: (addons-307023) KVM machine creation complete!
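Note: the repeated "will retry after …: waiting for machine to come up" lines above come from minikube's generic retry helper (retry.go) polling libvirt until the guest obtains a DHCP lease. The snippet below is a minimal, self-contained Go sketch of that style of jittered, exponentially growing backoff; the function names and the fake probe are illustrative assumptions, not minikube's actual API.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff calls probe until it succeeds or maxAttempts is exhausted,
    // sleeping a growing, jittered interval between attempts -- roughly the shape
    // of the "will retry after 304.027275ms: waiting for machine to come up" lines.
    func retryWithBackoff(probe func() error, maxAttempts int) error {
        delay := 200 * time.Millisecond
        for attempt := 1; attempt <= maxAttempts; attempt++ {
            err := probe()
            if err == nil {
                return nil
            }
            // +/-50% jitter so concurrent waiters do not retry in lockstep
            jittered := time.Duration(float64(delay) * (0.5 + rand.Float64()))
            fmt.Printf("attempt %d failed (%v); will retry after %v\n", attempt, err, jittered)
            time.Sleep(jittered)
            delay *= 2
        }
        return errors.New("gave up waiting for machine to come up")
    }

    func main() {
        start := time.Now()
        // hypothetical probe: pretend the guest gets an IP after ~3 seconds
        err := retryWithBackoff(func() error {
            if time.Since(start) < 3*time.Second {
                return errors.New("unable to find current IP address")
            }
            return nil
        }, 10)
        fmt.Println("done:", err)
    }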
	I0528 20:22:37.410742   12512 main.go:141] libmachine: (addons-307023) Calling .GetConfigRaw
	I0528 20:22:37.418632   12512 main.go:141] libmachine: (addons-307023) Calling .DriverName
	I0528 20:22:37.418838   12512 main.go:141] libmachine: (addons-307023) Calling .DriverName
	I0528 20:22:37.419005   12512 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0528 20:22:37.419017   12512 main.go:141] libmachine: (addons-307023) Calling .GetState
	I0528 20:22:37.420185   12512 main.go:141] libmachine: Detecting operating system of created instance...
	I0528 20:22:37.420201   12512 main.go:141] libmachine: Waiting for SSH to be available...
	I0528 20:22:37.420209   12512 main.go:141] libmachine: Getting to WaitForSSH function...
	I0528 20:22:37.420217   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:22:37.422444   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:37.422765   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:22:37.422793   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:37.422896   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:22:37.423051   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:22:37.423182   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:22:37.423333   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:22:37.423478   12512 main.go:141] libmachine: Using SSH client type: native
	I0528 20:22:37.423658   12512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0528 20:22:37.423673   12512 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0528 20:22:37.529140   12512 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 20:22:37.529163   12512 main.go:141] libmachine: Detecting the provisioner...
	I0528 20:22:37.529172   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:22:37.531832   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:37.532181   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:22:37.532207   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:37.532352   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:22:37.532563   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:22:37.532748   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:22:37.532926   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:22:37.533115   12512 main.go:141] libmachine: Using SSH client type: native
	I0528 20:22:37.533302   12512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0528 20:22:37.533314   12512 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0528 20:22:37.642437   12512 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0528 20:22:37.642548   12512 main.go:141] libmachine: found compatible host: buildroot
	I0528 20:22:37.642568   12512 main.go:141] libmachine: Provisioning with buildroot...
	I0528 20:22:37.642581   12512 main.go:141] libmachine: (addons-307023) Calling .GetMachineName
	I0528 20:22:37.642813   12512 buildroot.go:166] provisioning hostname "addons-307023"
	I0528 20:22:37.642834   12512 main.go:141] libmachine: (addons-307023) Calling .GetMachineName
	I0528 20:22:37.643040   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:22:37.645636   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:37.646019   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:22:37.646159   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:37.646394   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:22:37.646608   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:22:37.646785   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:22:37.646898   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:22:37.647091   12512 main.go:141] libmachine: Using SSH client type: native
	I0528 20:22:37.647376   12512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0528 20:22:37.647398   12512 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-307023 && echo "addons-307023" | sudo tee /etc/hostname
	I0528 20:22:37.768968   12512 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-307023
	
	I0528 20:22:37.768993   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:22:37.772094   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:37.772460   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:22:37.772490   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:37.772679   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:22:37.772874   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:22:37.773062   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:22:37.773228   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:22:37.773415   12512 main.go:141] libmachine: Using SSH client type: native
	I0528 20:22:37.773621   12512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0528 20:22:37.773642   12512 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-307023' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-307023/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-307023' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 20:22:37.887457   12512 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 20:22:37.887485   12512 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18966-3963/.minikube CaCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18966-3963/.minikube}
	I0528 20:22:37.887526   12512 buildroot.go:174] setting up certificates
	I0528 20:22:37.887535   12512 provision.go:84] configureAuth start
	I0528 20:22:37.887545   12512 main.go:141] libmachine: (addons-307023) Calling .GetMachineName
	I0528 20:22:37.887808   12512 main.go:141] libmachine: (addons-307023) Calling .GetIP
	I0528 20:22:37.890652   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:37.890973   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:22:37.891011   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:37.891169   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:22:37.893216   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:37.893587   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:22:37.893610   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:37.893663   12512 provision.go:143] copyHostCerts
	I0528 20:22:37.893737   12512 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem (1078 bytes)
	I0528 20:22:37.893877   12512 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem (1123 bytes)
	I0528 20:22:37.893942   12512 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem (1675 bytes)
	I0528 20:22:37.894022   12512 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem org=jenkins.addons-307023 san=[127.0.0.1 192.168.39.230 addons-307023 localhost minikube]
	I0528 20:22:38.032283   12512 provision.go:177] copyRemoteCerts
	I0528 20:22:38.032333   12512 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 20:22:38.032355   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:22:38.035593   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:38.035915   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:22:38.035978   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:38.036202   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:22:38.036374   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:22:38.036526   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:22:38.036686   12512 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/id_rsa Username:docker}
	I0528 20:22:38.120361   12512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0528 20:22:38.145488   12512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0528 20:22:38.170707   12512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0528 20:22:38.194342   12512 provision.go:87] duration metric: took 306.796879ms to configureAuth
	I0528 20:22:38.194369   12512 buildroot.go:189] setting minikube options for container-runtime
	I0528 20:22:38.194565   12512 config.go:182] Loaded profile config "addons-307023": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 20:22:38.194648   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:22:38.197722   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:38.198093   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:22:38.198124   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:38.198418   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:22:38.198626   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:22:38.198807   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:22:38.198908   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:22:38.199075   12512 main.go:141] libmachine: Using SSH client type: native
	I0528 20:22:38.199246   12512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0528 20:22:38.199260   12512 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0528 20:22:38.461596   12512 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0528 20:22:38.461619   12512 main.go:141] libmachine: Checking connection to Docker...
	I0528 20:22:38.461627   12512 main.go:141] libmachine: (addons-307023) Calling .GetURL
	I0528 20:22:38.462953   12512 main.go:141] libmachine: (addons-307023) DBG | Using libvirt version 6000000
	I0528 20:22:38.465537   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:38.465908   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:22:38.465936   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:38.466238   12512 main.go:141] libmachine: Docker is up and running!
	I0528 20:22:38.466263   12512 main.go:141] libmachine: Reticulating splines...
	I0528 20:22:38.466270   12512 client.go:171] duration metric: took 22.085566975s to LocalClient.Create
	I0528 20:22:38.466286   12512 start.go:167] duration metric: took 22.085643295s to libmachine.API.Create "addons-307023"
	I0528 20:22:38.466293   12512 start.go:293] postStartSetup for "addons-307023" (driver="kvm2")
	I0528 20:22:38.466302   12512 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0528 20:22:38.466318   12512 main.go:141] libmachine: (addons-307023) Calling .DriverName
	I0528 20:22:38.466521   12512 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0528 20:22:38.466549   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:22:38.468880   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:38.469195   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:22:38.469231   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:38.469363   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:22:38.469548   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:22:38.469687   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:22:38.469840   12512 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/id_rsa Username:docker}
	I0528 20:22:38.551927   12512 ssh_runner.go:195] Run: cat /etc/os-release
	I0528 20:22:38.556169   12512 info.go:137] Remote host: Buildroot 2023.02.9
	I0528 20:22:38.556192   12512 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/addons for local assets ...
	I0528 20:22:38.556265   12512 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/files for local assets ...
	I0528 20:22:38.556293   12512 start.go:296] duration metric: took 89.994852ms for postStartSetup
	I0528 20:22:38.556326   12512 main.go:141] libmachine: (addons-307023) Calling .GetConfigRaw
	I0528 20:22:38.556878   12512 main.go:141] libmachine: (addons-307023) Calling .GetIP
	I0528 20:22:38.559569   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:38.559936   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:22:38.559965   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:38.560189   12512 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/config.json ...
	I0528 20:22:38.560367   12512 start.go:128] duration metric: took 22.196477548s to createHost
	I0528 20:22:38.560387   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:22:38.562784   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:38.563155   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:22:38.563180   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:38.563338   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:22:38.563527   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:22:38.563700   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:22:38.563868   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:22:38.564036   12512 main.go:141] libmachine: Using SSH client type: native
	I0528 20:22:38.564209   12512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0528 20:22:38.564226   12512 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0528 20:22:38.670636   12512 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716927758.641434286
	
	I0528 20:22:38.670661   12512 fix.go:216] guest clock: 1716927758.641434286
	I0528 20:22:38.670671   12512 fix.go:229] Guest: 2024-05-28 20:22:38.641434286 +0000 UTC Remote: 2024-05-28 20:22:38.56037762 +0000 UTC m=+22.294392597 (delta=81.056666ms)
	I0528 20:22:38.670696   12512 fix.go:200] guest clock delta is within tolerance: 81.056666ms
	I0528 20:22:38.670703   12512 start.go:83] releasing machines lock for "addons-307023", held for 22.306877351s
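Note: the fix.go lines above compare the guest clock against the host clock and skip a time resync because the delta is inside a tolerance. Below is a minimal Go sketch of that comparison using the timestamps from the log; the one-second tolerance is an assumed value for illustration, not necessarily what minikube uses.

    package main

    import (
        "fmt"
        "time"
    )

    // clockDeltaWithinTolerance reports the absolute guest/host clock delta and
    // whether it is small enough that no time resync is needed.
    func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tolerance
    }

    func main() {
        // timestamps taken from the log above: guest clock 1716927758.641434286,
        // host clock roughly 81.056666ms behind it
        guest := time.Unix(1716927758, 641434286)
        host := guest.Add(-81056666 * time.Nanosecond)
        delta, ok := clockDeltaWithinTolerance(guest, host, time.Second) // assumed 1s tolerance
        fmt.Printf("delta=%v withinTolerance=%v\n", delta, ok)
    }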
	I0528 20:22:38.670730   12512 main.go:141] libmachine: (addons-307023) Calling .DriverName
	I0528 20:22:38.670994   12512 main.go:141] libmachine: (addons-307023) Calling .GetIP
	I0528 20:22:38.673545   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:38.673995   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:22:38.674022   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:38.674159   12512 main.go:141] libmachine: (addons-307023) Calling .DriverName
	I0528 20:22:38.674595   12512 main.go:141] libmachine: (addons-307023) Calling .DriverName
	I0528 20:22:38.674753   12512 main.go:141] libmachine: (addons-307023) Calling .DriverName
	I0528 20:22:38.674845   12512 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0528 20:22:38.674884   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:22:38.674928   12512 ssh_runner.go:195] Run: cat /version.json
	I0528 20:22:38.674955   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:22:38.677889   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:38.678112   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:38.678443   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:22:38.678472   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:38.678535   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:22:38.678573   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:38.678651   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:22:38.678832   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:22:38.678894   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:22:38.678986   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:22:38.679060   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:22:38.679323   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:22:38.679352   12512 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/id_rsa Username:docker}
	I0528 20:22:38.679499   12512 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/id_rsa Username:docker}
	I0528 20:22:38.764221   12512 ssh_runner.go:195] Run: systemctl --version
	I0528 20:22:38.792258   12512 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0528 20:22:38.961195   12512 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0528 20:22:38.967494   12512 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0528 20:22:38.967551   12512 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 20:22:38.987030   12512 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0528 20:22:38.987051   12512 start.go:494] detecting cgroup driver to use...
	I0528 20:22:38.987113   12512 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0528 20:22:39.007816   12512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 20:22:39.024178   12512 docker.go:217] disabling cri-docker service (if available) ...
	I0528 20:22:39.024240   12512 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0528 20:22:39.040535   12512 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0528 20:22:39.056974   12512 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0528 20:22:39.187646   12512 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0528 20:22:39.329537   12512 docker.go:233] disabling docker service ...
	I0528 20:22:39.329623   12512 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0528 20:22:39.344137   12512 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0528 20:22:39.357150   12512 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0528 20:22:39.502803   12512 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0528 20:22:39.631687   12512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0528 20:22:39.645878   12512 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 20:22:39.664157   12512 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0528 20:22:39.664235   12512 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:22:39.675029   12512 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0528 20:22:39.675086   12512 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:22:39.685890   12512 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:22:39.696429   12512 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:22:39.706996   12512 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0528 20:22:39.717594   12512 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:22:39.728286   12512 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:22:39.745432   12512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
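Note: the sed one-liners above rewrite the cri-o drop-in /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon cgroup, unprivileged-port sysctl). The resulting file is not shown in the log; assuming cri-o's usual TOML sections for these keys, the edited drop-in would look roughly like this sketch:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]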
	I0528 20:22:39.756106   12512 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0528 20:22:39.765607   12512 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0528 20:22:39.765666   12512 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0528 20:22:39.779281   12512 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0528 20:22:39.789067   12512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 20:22:39.912673   12512 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0528 20:22:40.050327   12512 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0528 20:22:40.050408   12512 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0528 20:22:40.055035   12512 start.go:562] Will wait 60s for crictl version
	I0528 20:22:40.055097   12512 ssh_runner.go:195] Run: which crictl
	I0528 20:22:40.058958   12512 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0528 20:22:40.097593   12512 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0528 20:22:40.097729   12512 ssh_runner.go:195] Run: crio --version
	I0528 20:22:40.125486   12512 ssh_runner.go:195] Run: crio --version
	I0528 20:22:40.158285   12512 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0528 20:22:40.159618   12512 main.go:141] libmachine: (addons-307023) Calling .GetIP
	I0528 20:22:40.162473   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:40.162822   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:22:40.162851   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:22:40.163047   12512 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0528 20:22:40.167411   12512 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 20:22:40.180365   12512 kubeadm.go:877] updating cluster {Name:addons-307023 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-307023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0528 20:22:40.180486   12512 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 20:22:40.180529   12512 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 20:22:40.212715   12512 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0528 20:22:40.212793   12512 ssh_runner.go:195] Run: which lz4
	I0528 20:22:40.217038   12512 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0528 20:22:40.221447   12512 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0528 20:22:40.221475   12512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0528 20:22:41.541443   12512 crio.go:462] duration metric: took 1.324445637s to copy over tarball
	I0528 20:22:41.541511   12512 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0528 20:22:43.770641   12512 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.229100733s)
	I0528 20:22:43.770675   12512 crio.go:469] duration metric: took 2.229208312s to extract the tarball
	I0528 20:22:43.770682   12512 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0528 20:22:43.808573   12512 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 20:22:43.851141   12512 crio.go:514] all images are preloaded for cri-o runtime.
	I0528 20:22:43.851164   12512 cache_images.go:84] Images are preloaded, skipping loading
	I0528 20:22:43.851171   12512 kubeadm.go:928] updating node { 192.168.39.230 8443 v1.30.1 crio true true} ...
	I0528 20:22:43.851267   12512 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-307023 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.230
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:addons-307023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0528 20:22:43.851325   12512 ssh_runner.go:195] Run: crio config
	I0528 20:22:43.898904   12512 cni.go:84] Creating CNI manager for ""
	I0528 20:22:43.898928   12512 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 20:22:43.898940   12512 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0528 20:22:43.898968   12512 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.230 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-307023 NodeName:addons-307023 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.230"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.230 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0528 20:22:43.899105   12512 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.230
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-307023"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.230
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.230"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0528 20:22:43.899162   12512 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0528 20:22:43.909685   12512 binaries.go:44] Found k8s binaries, skipping transfer
	I0528 20:22:43.909752   12512 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0528 20:22:43.919534   12512 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0528 20:22:43.936893   12512 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0528 20:22:43.953778   12512 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
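Note: the kubeadm.yaml.new written above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). The Go sketch below lists each document's kind using the third-party gopkg.in/yaml.v3 decoder; the file path is taken from the log, and running this on the node itself is an assumption made for illustration.

    package main

    import (
        "fmt"
        "io"
        "log"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path taken from the log above
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err != nil {
                if err == io.EOF {
                    break // no more documents in the stream
                }
                log.Fatal(err)
            }
            fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
        }
    }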
	I0528 20:22:43.970704   12512 ssh_runner.go:195] Run: grep 192.168.39.230	control-plane.minikube.internal$ /etc/hosts
	I0528 20:22:43.974802   12512 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.230	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 20:22:43.987166   12512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 20:22:44.111756   12512 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 20:22:44.129559   12512 certs.go:68] Setting up /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023 for IP: 192.168.39.230
	I0528 20:22:44.129578   12512 certs.go:194] generating shared ca certs ...
	I0528 20:22:44.129593   12512 certs.go:226] acquiring lock for ca certs: {Name:mkf081a2e878fd451cfbdf01dc28a5f7a6a3abc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:22:44.129728   12512 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key
	I0528 20:22:44.304591   12512 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt ...
	I0528 20:22:44.304617   12512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt: {Name:mkf12219490495734c93ec1a852db4cdd558f74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:22:44.304799   12512 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key ...
	I0528 20:22:44.304817   12512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key: {Name:mk6f16953334bbe6cb1ef60b5d82f2adc64cf131 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:22:44.304916   12512 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key
	I0528 20:22:44.563093   12512 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.crt ...
	I0528 20:22:44.563125   12512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.crt: {Name:mk26fe5087377e64623e3b97df2d91a014dc6cff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:22:44.563294   12512 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key ...
	I0528 20:22:44.563310   12512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key: {Name:mk9016fed3ac742477d4dd344b94def9b07486f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:22:44.563405   12512 certs.go:256] generating profile certs ...
	I0528 20:22:44.563469   12512 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.key
	I0528 20:22:44.563488   12512 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.crt with IP's: []
	I0528 20:22:44.789228   12512 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.crt ...
	I0528 20:22:44.789262   12512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.crt: {Name:mk8081754a912d12b3b37a8bb3f19ba0a05b95d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:22:44.789436   12512 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.key ...
	I0528 20:22:44.789447   12512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.key: {Name:mk904f655e4f408646229d0357f533e8ac438914 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:22:44.789515   12512 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/apiserver.key.25a98af5
	I0528 20:22:44.789533   12512 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/apiserver.crt.25a98af5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.230]
	I0528 20:22:44.881091   12512 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/apiserver.crt.25a98af5 ...
	I0528 20:22:44.881123   12512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/apiserver.crt.25a98af5: {Name:mk77497c8eb56a50e975cffeb9c1ba646e4de9b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:22:44.881283   12512 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/apiserver.key.25a98af5 ...
	I0528 20:22:44.881297   12512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/apiserver.key.25a98af5: {Name:mk1a691ced54247f535d479d0911900c03983ca6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:22:44.881362   12512 certs.go:381] copying /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/apiserver.crt.25a98af5 -> /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/apiserver.crt
	I0528 20:22:44.881427   12512 certs.go:385] copying /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/apiserver.key.25a98af5 -> /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/apiserver.key
	I0528 20:22:44.881475   12512 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/proxy-client.key
	I0528 20:22:44.881491   12512 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/proxy-client.crt with IP's: []
	I0528 20:22:44.971782   12512 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/proxy-client.crt ...
	I0528 20:22:44.971818   12512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/proxy-client.crt: {Name:mk522e279cdecac94035a78ba55093e7ea0233ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:22:44.971983   12512 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/proxy-client.key ...
	I0528 20:22:44.971994   12512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/proxy-client.key: {Name:mkc2a01f45a46df7e3eb50b70f86bb7a229ad840 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:22:44.972146   12512 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem (1675 bytes)
	I0528 20:22:44.972178   12512 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem (1078 bytes)
	I0528 20:22:44.972199   12512 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem (1123 bytes)
	I0528 20:22:44.972222   12512 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem (1675 bytes)
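Note: the certs.go/crypto.go steps above generate a local CA and then sign an apiserver certificate carrying the listed IP SANs. The Go sketch below shows the same idea with the standard crypto/x509 package; it is a simplified illustration (2048-bit RSA, 24h validity, minimal key-usage flags), not minikube's actual crypto.go code.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // 1. Self-signed CA, analogous to minikubeCA.
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        caCert, err := x509.ParseCertificate(caDER)
        if err != nil {
            log.Fatal(err)
        }

        // 2. Server certificate with SANs, analogous to apiserver.crt
        //    (SAN list copied from the "Generating cert ... with IP's" log line above).
        srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.230"),
            },
            DNSNames: []string{"addons-307023", "localhost", "minikube"},
        }
        srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }

        // 3. PEM-encode the signed server certificate to stdout.
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }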
	I0528 20:22:44.972727   12512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0528 20:22:45.029934   12512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0528 20:22:45.056736   12512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0528 20:22:45.080415   12512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0528 20:22:45.104571   12512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0528 20:22:45.128462   12512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0528 20:22:45.152463   12512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0528 20:22:45.176387   12512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0528 20:22:45.200966   12512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0528 20:22:45.229791   12512 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0528 20:22:45.248613   12512 ssh_runner.go:195] Run: openssl version
	I0528 20:22:45.254908   12512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0528 20:22:45.265897   12512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0528 20:22:45.270624   12512 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 28 20:22 /usr/share/ca-certificates/minikubeCA.pem
	I0528 20:22:45.270681   12512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0528 20:22:45.276613   12512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0528 20:22:45.287371   12512 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0528 20:22:45.291737   12512 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0528 20:22:45.291788   12512 kubeadm.go:391] StartCluster: {Name:addons-307023 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-307023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 20:22:45.291878   12512 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0528 20:22:45.291947   12512 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0528 20:22:45.336082   12512 cri.go:89] found id: ""
	I0528 20:22:45.336151   12512 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0528 20:22:45.346154   12512 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0528 20:22:45.356088   12512 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0528 20:22:45.365805   12512 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0528 20:22:45.365835   12512 kubeadm.go:156] found existing configuration files:
	
	I0528 20:22:45.365884   12512 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0528 20:22:45.375013   12512 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0528 20:22:45.375063   12512 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0528 20:22:45.384434   12512 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0528 20:22:45.393549   12512 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0528 20:22:45.393612   12512 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0528 20:22:45.403609   12512 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0528 20:22:45.413189   12512 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0528 20:22:45.413252   12512 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0528 20:22:45.423165   12512 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0528 20:22:45.432522   12512 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0528 20:22:45.432577   12512 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0528 20:22:45.442848   12512 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0528 20:22:45.499395   12512 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0528 20:22:45.499476   12512 kubeadm.go:309] [preflight] Running pre-flight checks
	I0528 20:22:45.628359   12512 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0528 20:22:45.628475   12512 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0528 20:22:45.628585   12512 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0528 20:22:45.837919   12512 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0528 20:22:46.095484   12512 out.go:204]   - Generating certificates and keys ...
	I0528 20:22:46.095628   12512 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0528 20:22:46.095757   12512 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0528 20:22:46.095923   12512 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0528 20:22:46.196705   12512 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0528 20:22:46.664394   12512 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0528 20:22:46.826723   12512 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0528 20:22:46.980540   12512 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0528 20:22:46.980742   12512 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-307023 localhost] and IPs [192.168.39.230 127.0.0.1 ::1]
	I0528 20:22:47.128366   12512 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0528 20:22:47.128553   12512 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-307023 localhost] and IPs [192.168.39.230 127.0.0.1 ::1]
	I0528 20:22:47.457912   12512 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0528 20:22:47.508317   12512 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0528 20:22:47.761559   12512 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0528 20:22:47.761624   12512 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0528 20:22:47.872355   12512 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0528 20:22:48.075763   12512 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0528 20:22:48.234986   12512 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0528 20:22:48.422832   12512 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0528 20:22:48.588090   12512 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0528 20:22:48.588697   12512 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0528 20:22:48.592972   12512 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0528 20:22:48.619220   12512 out.go:204]   - Booting up control plane ...
	I0528 20:22:48.619338   12512 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0528 20:22:48.619427   12512 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0528 20:22:48.619518   12512 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0528 20:22:48.619670   12512 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0528 20:22:48.619792   12512 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0528 20:22:48.619874   12512 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0528 20:22:48.758621   12512 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0528 20:22:48.758699   12512 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0528 20:22:49.259273   12512 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.094252ms
	I0528 20:22:49.259387   12512 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0528 20:22:54.758266   12512 kubeadm.go:309] [api-check] The API server is healthy after 5.501922128s
	I0528 20:22:54.774818   12512 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0528 20:22:54.793326   12512 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0528 20:22:54.822075   12512 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0528 20:22:54.822317   12512 kubeadm.go:309] [mark-control-plane] Marking the node addons-307023 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0528 20:22:54.833078   12512 kubeadm.go:309] [bootstrap-token] Using token: dnpxo0.wrjqml256vgz5hhv
	I0528 20:22:54.834464   12512 out.go:204]   - Configuring RBAC rules ...
	I0528 20:22:54.834613   12512 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0528 20:22:54.839099   12512 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0528 20:22:54.849731   12512 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0528 20:22:54.853625   12512 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0528 20:22:54.857161   12512 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0528 20:22:54.860972   12512 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0528 20:22:55.165222   12512 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0528 20:22:55.607599   12512 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0528 20:22:56.164530   12512 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0528 20:22:56.164556   12512 kubeadm.go:309] 
	I0528 20:22:56.164620   12512 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0528 20:22:56.164632   12512 kubeadm.go:309] 
	I0528 20:22:56.164721   12512 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0528 20:22:56.164732   12512 kubeadm.go:309] 
	I0528 20:22:56.164777   12512 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0528 20:22:56.164883   12512 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0528 20:22:56.164964   12512 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0528 20:22:56.164974   12512 kubeadm.go:309] 
	I0528 20:22:56.165036   12512 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0528 20:22:56.165048   12512 kubeadm.go:309] 
	I0528 20:22:56.165102   12512 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0528 20:22:56.165111   12512 kubeadm.go:309] 
	I0528 20:22:56.165180   12512 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0528 20:22:56.165296   12512 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0528 20:22:56.165389   12512 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0528 20:22:56.165405   12512 kubeadm.go:309] 
	I0528 20:22:56.165478   12512 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0528 20:22:56.165544   12512 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0528 20:22:56.165551   12512 kubeadm.go:309] 
	I0528 20:22:56.165618   12512 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token dnpxo0.wrjqml256vgz5hhv \
	I0528 20:22:56.165714   12512 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1de7c92f7c6fda1d3adee02fe3e58e442e8ad50d9d224864ca8764aa1a3ae8bb \
	I0528 20:22:56.165733   12512 kubeadm.go:309] 	--control-plane 
	I0528 20:22:56.165739   12512 kubeadm.go:309] 
	I0528 20:22:56.165868   12512 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0528 20:22:56.165885   12512 kubeadm.go:309] 
	I0528 20:22:56.166007   12512 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token dnpxo0.wrjqml256vgz5hhv \
	I0528 20:22:56.166147   12512 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1de7c92f7c6fda1d3adee02fe3e58e442e8ad50d9d224864ca8764aa1a3ae8bb 
	I0528 20:22:56.166401   12512 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0528 20:22:56.166427   12512 cni.go:84] Creating CNI manager for ""
	I0528 20:22:56.166435   12512 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 20:22:56.168305   12512 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0528 20:22:56.169640   12512 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0528 20:22:56.180413   12512 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0528 20:22:56.199324   12512 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0528 20:22:56.199403   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:22:56.199463   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-307023 minikube.k8s.io/updated_at=2024_05_28T20_22_56_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82 minikube.k8s.io/name=addons-307023 minikube.k8s.io/primary=true
	I0528 20:22:56.222548   12512 ops.go:34] apiserver oom_adj: -16
	I0528 20:22:56.336687   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:22:56.837360   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:22:57.337005   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:22:57.837162   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:22:58.336867   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:22:58.837058   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:22:59.337480   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:22:59.837523   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:23:00.336774   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:23:00.836910   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:23:01.337664   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:23:01.836927   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:23:02.337625   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:23:02.837710   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:23:03.337117   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:23:03.837064   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:23:04.337743   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:23:04.836799   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:23:05.337104   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:23:05.836943   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:23:06.337349   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:23:06.837365   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:23:07.337549   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:23:07.836707   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:23:08.337681   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:23:08.837464   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:23:09.336953   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:23:09.836789   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:23:10.336885   12512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:23:10.427885   12512 kubeadm.go:1107] duration metric: took 14.228541597s to wait for elevateKubeSystemPrivileges
	W0528 20:23:10.427930   12512 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0528 20:23:10.427941   12512 kubeadm.go:393] duration metric: took 25.136155888s to StartCluster
	I0528 20:23:10.427960   12512 settings.go:142] acquiring lock: {Name:mkc1182212d780ab38539839372c25eb989b2e70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:23:10.428087   12512 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 20:23:10.428544   12512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/kubeconfig: {Name:mkb9ab62df00efc21274382bbe7156f03baabac9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:23:10.428741   12512 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0528 20:23:10.428765   12512 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.230 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0528 20:23:10.430753   12512 out.go:177] * Verifying Kubernetes components...
	I0528 20:23:10.428826   12512 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0528 20:23:10.428927   12512 config.go:182] Loaded profile config "addons-307023": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 20:23:10.432118   12512 addons.go:69] Setting yakd=true in profile "addons-307023"
	I0528 20:23:10.432130   12512 addons.go:69] Setting inspektor-gadget=true in profile "addons-307023"
	I0528 20:23:10.432154   12512 addons.go:69] Setting storage-provisioner=true in profile "addons-307023"
	I0528 20:23:10.432165   12512 addons.go:234] Setting addon inspektor-gadget=true in "addons-307023"
	I0528 20:23:10.432168   12512 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-307023"
	I0528 20:23:10.432170   12512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 20:23:10.432183   12512 addons.go:69] Setting metrics-server=true in profile "addons-307023"
	I0528 20:23:10.432190   12512 addons.go:69] Setting gcp-auth=true in profile "addons-307023"
	I0528 20:23:10.432200   12512 addons.go:69] Setting volcano=true in profile "addons-307023"
	I0528 20:23:10.432206   12512 host.go:66] Checking if "addons-307023" exists ...
	I0528 20:23:10.432218   12512 addons.go:69] Setting registry=true in profile "addons-307023"
	I0528 20:23:10.432224   12512 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-307023"
	I0528 20:23:10.432226   12512 addons.go:69] Setting volumesnapshots=true in profile "addons-307023"
	I0528 20:23:10.432178   12512 addons.go:234] Setting addon storage-provisioner=true in "addons-307023"
	I0528 20:23:10.432243   12512 addons.go:234] Setting addon volumesnapshots=true in "addons-307023"
	I0528 20:23:10.432247   12512 addons.go:69] Setting default-storageclass=true in profile "addons-307023"
	I0528 20:23:10.432240   12512 addons.go:69] Setting ingress=true in profile "addons-307023"
	I0528 20:23:10.432266   12512 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-307023"
	I0528 20:23:10.432266   12512 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-307023"
	I0528 20:23:10.432272   12512 host.go:66] Checking if "addons-307023" exists ...
	I0528 20:23:10.432274   12512 host.go:66] Checking if "addons-307023" exists ...
	I0528 20:23:10.432283   12512 addons.go:234] Setting addon ingress=true in "addons-307023"
	I0528 20:23:10.432290   12512 host.go:66] Checking if "addons-307023" exists ...
	I0528 20:23:10.432321   12512 host.go:66] Checking if "addons-307023" exists ...
	I0528 20:23:10.432191   12512 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-307023"
	I0528 20:23:10.432674   12512 addons.go:69] Setting helm-tiller=true in profile "addons-307023"
	I0528 20:23:10.432679   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.432698   12512 addons.go:234] Setting addon helm-tiller=true in "addons-307023"
	I0528 20:23:10.432713   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.432719   12512 host.go:66] Checking if "addons-307023" exists ...
	I0528 20:23:10.432218   12512 addons.go:234] Setting addon volcano=true in "addons-307023"
	I0528 20:23:10.432673   12512 addons.go:69] Setting ingress-dns=true in profile "addons-307023"
	I0528 20:23:10.432792   12512 host.go:66] Checking if "addons-307023" exists ...
	I0528 20:23:10.432810   12512 addons.go:234] Setting addon ingress-dns=true in "addons-307023"
	I0528 20:23:10.432848   12512 host.go:66] Checking if "addons-307023" exists ...
	I0528 20:23:10.433034   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.433057   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.433060   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.433074   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.433092   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.433108   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.433113   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.432209   12512 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-307023"
	I0528 20:23:10.433129   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.433143   12512 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-307023"
	I0528 20:23:10.432201   12512 addons.go:234] Setting addon metrics-server=true in "addons-307023"
	I0528 20:23:10.432210   12512 mustload.go:65] Loading cluster: addons-307023
	I0528 20:23:10.432216   12512 addons.go:69] Setting cloud-spanner=true in profile "addons-307023"
	I0528 20:23:10.433293   12512 addons.go:234] Setting addon cloud-spanner=true in "addons-307023"
	I0528 20:23:10.433321   12512 host.go:66] Checking if "addons-307023" exists ...
	I0528 20:23:10.433360   12512 config.go:182] Loaded profile config "addons-307023": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 20:23:10.433425   12512 host.go:66] Checking if "addons-307023" exists ...
	I0528 20:23:10.432242   12512 addons.go:234] Setting addon registry=true in "addons-307023"
	I0528 20:23:10.433632   12512 host.go:66] Checking if "addons-307023" exists ...
	I0528 20:23:10.433677   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.433720   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.433680   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.433803   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.432663   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.433854   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.432665   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.433906   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.433930   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.432158   12512 addons.go:234] Setting addon yakd=true in "addons-307023"
	I0528 20:23:10.432669   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.432669   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.434050   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.434061   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.432663   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.434090   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.434125   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.434135   12512 host.go:66] Checking if "addons-307023" exists ...
	I0528 20:23:10.434100   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.434274   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.434471   12512 host.go:66] Checking if "addons-307023" exists ...
	I0528 20:23:10.454031   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36189
	I0528 20:23:10.454068   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46781
	I0528 20:23:10.454103   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44853
	I0528 20:23:10.454444   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.454498   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.454544   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33893
	I0528 20:23:10.454886   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.455063   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.455081   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.455139   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.455148   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.455157   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.455465   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.455469   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.455702   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.455722   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.456012   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.456060   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.456169   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.458104   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.458119   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.458169   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34781
	I0528 20:23:10.458554   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.458908   12512 main.go:141] libmachine: (addons-307023) Calling .GetState
	I0528 20:23:10.459094   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.459456   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.459466   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.459663   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.459754   12512 main.go:141] libmachine: (addons-307023) Calling .GetState
	I0528 20:23:10.462264   12512 host.go:66] Checking if "addons-307023" exists ...
	I0528 20:23:10.462658   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.462775   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.464641   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42345
	I0528 20:23:10.466058   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.466096   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.466358   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.466394   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.466879   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.466899   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.466961   12512 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-307023"
	I0528 20:23:10.467009   12512 host.go:66] Checking if "addons-307023" exists ...
	I0528 20:23:10.467365   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.467402   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.467622   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.467657   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.474151   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46087
	I0528 20:23:10.474695   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.475268   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.475649   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.475672   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.476177   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.476193   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.476337   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.476868   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.476915   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.477479   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.478084   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.478134   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.502516   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34847
	I0528 20:23:10.503146   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.503228   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35799
	I0528 20:23:10.503310   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40083
	I0528 20:23:10.503672   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.503689   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.503750   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.503772   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.504172   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.504199   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.504302   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.504324   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.504660   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.504668   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.505209   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.505246   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.505250   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.505278   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.505643   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38605
	I0528 20:23:10.505797   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.505825   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34659
	I0528 20:23:10.506035   12512 main.go:141] libmachine: (addons-307023) Calling .GetState
	I0528 20:23:10.507541   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.507794   12512 main.go:141] libmachine: (addons-307023) Calling .DriverName
	I0528 20:23:10.508037   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.508052   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.509828   12512 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0528 20:23:10.508390   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.508845   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40831
	I0528 20:23:10.511367   12512 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0528 20:23:10.511381   12512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0528 20:23:10.511398   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:23:10.512054   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.512098   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.512573   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.513035   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.513058   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.513360   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.513875   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.513913   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.514319   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.514651   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:23:10.514668   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.514874   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:23:10.515066   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:23:10.515225   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:23:10.515425   12512 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/id_rsa Username:docker}
	I0528 20:23:10.518162   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.518755   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.518770   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.518827   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33807
	I0528 20:23:10.519304   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.519700   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.519733   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.519935   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.520355   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.520374   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.520430   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46515
	I0528 20:23:10.520848   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.521014   12512 main.go:141] libmachine: (addons-307023) Calling .GetState
	I0528 20:23:10.522126   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41273
	I0528 20:23:10.522534   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.523010   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.523026   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.523376   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.523902   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.523939   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.524806   12512 main.go:141] libmachine: (addons-307023) Calling .DriverName
	I0528 20:23:10.526859   12512 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0528 20:23:10.525444   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.526753   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38967
	I0528 20:23:10.528374   12512 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0528 20:23:10.528387   12512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0528 20:23:10.528405   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:23:10.529209   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.529229   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.529587   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.529666   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.529847   12512 main.go:141] libmachine: (addons-307023) Calling .GetState
	I0528 20:23:10.530069   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42071
	I0528 20:23:10.530502   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.530748   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.530764   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.530942   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.530955   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.531013   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40701
	I0528 20:23:10.531408   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.531603   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.531799   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.531853   12512 main.go:141] libmachine: (addons-307023) Calling .GetState
	I0528 20:23:10.531966   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.531977   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.532365   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.532417   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.532546   12512 main.go:141] libmachine: (addons-307023) Calling .GetState
	I0528 20:23:10.534176   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.534221   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.534464   12512 main.go:141] libmachine: (addons-307023) Calling .DriverName
	I0528 20:23:10.534564   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:23:10.534583   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.536093   12512 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0528 20:23:10.537276   12512 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0528 20:23:10.537295   12512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0528 20:23:10.537312   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:23:10.536183   12512 main.go:141] libmachine: (addons-307023) Calling .DriverName
	I0528 20:23:10.534845   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:23:10.536216   12512 main.go:141] libmachine: (addons-307023) Calling .DriverName
	I0528 20:23:10.537554   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:23:10.537742   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:23:10.537843   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42725
	I0528 20:23:10.538010   12512 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/id_rsa Username:docker}
	I0528 20:23:10.538164   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:10.538174   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:10.541031   12512 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0528 20:23:10.538465   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:10.538490   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:10.538601   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.540563   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44787
	I0528 20:23:10.540601   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.541133   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:23:10.542693   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38545
	I0528 20:23:10.544877   12512 out.go:177]   - Using image docker.io/registry:2.8.3
	I0528 20:23:10.543373   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33251
	I0528 20:23:10.543618   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:10.545003   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:10.545012   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:10.543655   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:23:10.545065   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.543985   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:23:10.544120   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.544254   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.545182   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.544589   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.548079   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.548167   12512 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0528 20:23:10.548188   12512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0528 20:23:10.545313   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:10.545335   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:10.548211   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:23:10.548227   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:10.545406   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:23:10.545525   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.547298   12512 main.go:141] libmachine: Using API Version  1
	W0528 20:23:10.548304   12512 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0528 20:23:10.548313   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.547586   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45531
	I0528 20:23:10.547725   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.548424   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.548482   12512 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/id_rsa Username:docker}
	I0528 20:23:10.548679   12512 main.go:141] libmachine: (addons-307023) Calling .DriverName
	I0528 20:23:10.550302   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36343
	I0528 20:23:10.550477   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.550725   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.551002   12512 main.go:141] libmachine: (addons-307023) Calling .GetState
	I0528 20:23:10.551132   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.551149   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.551948   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.551968   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.552962   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45345
	I0528 20:23:10.553389   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.553929   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.554126   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.554167   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.554265   12512 main.go:141] libmachine: (addons-307023) Calling .DriverName
	I0528 20:23:10.554373   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:23:10.554394   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.554426   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.554493   12512 main.go:141] libmachine: (addons-307023) Calling .GetState
	I0528 20:23:10.554696   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.554712   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.554724   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.554749   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:23:10.554929   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:23:10.556730   12512 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0528 20:23:10.554727   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.555145   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:23:10.555724   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.556233   12512 main.go:141] libmachine: (addons-307023) Calling .DriverName
	I0528 20:23:10.556356   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.558125   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.558180   12512 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0528 20:23:10.558198   12512 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0528 20:23:10.558230   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:23:10.558450   12512 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/id_rsa Username:docker}
	I0528 20:23:10.558740   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.558763   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.558810   12512 main.go:141] libmachine: (addons-307023) Calling .GetState
	I0528 20:23:10.559312   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.559355   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.562411   12512 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.28.1
	I0528 20:23:10.559542   12512 main.go:141] libmachine: (addons-307023) Calling .GetState
	I0528 20:23:10.561847   12512 addons.go:234] Setting addon default-storageclass=true in "addons-307023"
	I0528 20:23:10.562039   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.562600   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:23:10.563582   12512 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0528 20:23:10.563596   12512 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0528 20:23:10.563198   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43005
	I0528 20:23:10.563610   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:23:10.563674   12512 host.go:66] Checking if "addons-307023" exists ...
	I0528 20:23:10.563716   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:23:10.563738   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.564024   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.564050   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.564589   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:23:10.564816   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:23:10.564977   12512 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/id_rsa Username:docker}
	I0528 20:23:10.565482   12512 main.go:141] libmachine: (addons-307023) Calling .DriverName
	I0528 20:23:10.565669   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37547
	I0528 20:23:10.567114   12512 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0528 20:23:10.566026   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.566281   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.568251   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.569832   12512 out.go:177]   - Using image docker.io/busybox:stable
	I0528 20:23:10.568629   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:23:10.568641   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44153
	I0528 20:23:10.568937   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:23:10.569084   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.569084   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.570316   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34443
	I0528 20:23:10.571170   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.571197   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.571254   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.571345   12512 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0528 20:23:10.571357   12512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0528 20:23:10.571374   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:23:10.571403   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:23:10.571533   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:23:10.571600   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.571736   12512 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/id_rsa Username:docker}
	I0528 20:23:10.572014   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.572071   12512 main.go:141] libmachine: (addons-307023) Calling .GetState
	I0528 20:23:10.572203   12512 main.go:141] libmachine: (addons-307023) Calling .GetState
	I0528 20:23:10.572874   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.572948   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.573644   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.573665   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.573952   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.573974   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.574448   12512 main.go:141] libmachine: (addons-307023) Calling .DriverName
	I0528 20:23:10.574654   12512 main.go:141] libmachine: (addons-307023) Calling .DriverName
	I0528 20:23:10.576143   12512 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0528 20:23:10.574998   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.575306   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.576099   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.576724   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:23:10.577484   12512 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0528 20:23:10.578723   12512 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0528 20:23:10.578742   12512 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0528 20:23:10.578761   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:23:10.577510   12512 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0528 20:23:10.578797   12512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0528 20:23:10.578812   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:23:10.577546   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:23:10.578868   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.577802   12512 main.go:141] libmachine: (addons-307023) Calling .GetState
	I0528 20:23:10.577806   12512 main.go:141] libmachine: (addons-307023) Calling .GetState
	I0528 20:23:10.577834   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:23:10.578703   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34787
	I0528 20:23:10.579107   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:23:10.579378   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.579452   12512 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/id_rsa Username:docker}
	I0528 20:23:10.580120   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.580136   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.580827   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.581050   12512 main.go:141] libmachine: (addons-307023) Calling .GetState
	I0528 20:23:10.581226   12512 main.go:141] libmachine: (addons-307023) Calling .DriverName
	I0528 20:23:10.583095   12512 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0528 20:23:10.583117   12512 main.go:141] libmachine: (addons-307023) Calling .DriverName
	I0528 20:23:10.584608   12512 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0528 20:23:10.584621   12512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0528 20:23:10.584639   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:23:10.586170   12512 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0528 20:23:10.583528   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.583553   12512 main.go:141] libmachine: (addons-307023) Calling .DriverName
	I0528 20:23:10.584044   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:23:10.584266   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.584829   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:23:10.587602   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:23:10.587629   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.588808   12512 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0528 20:23:10.587699   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:23:10.587758   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:23:10.587817   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:23:10.588275   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.588891   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:23:10.589831   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.591900   12512 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0528 20:23:10.590032   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:23:10.590061   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:23:10.590820   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:23:10.590836   12512 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0528 20:23:10.591039   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:23:10.592989   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39947
	I0528 20:23:10.593007   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43059
	I0528 20:23:10.594344   12512 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0528 20:23:10.594359   12512 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0528 20:23:10.594373   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:23:10.593294   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.593294   12512 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0528 20:23:10.594418   12512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0528 20:23:10.594429   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:23:10.593435   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.593479   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.593487   12512 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/id_rsa Username:docker}
	I0528 20:23:10.593593   12512 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/id_rsa Username:docker}
	I0528 20:23:10.593678   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:23:10.595313   12512 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/id_rsa Username:docker}
	I0528 20:23:10.595399   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.595418   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.595644   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.595703   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.595759   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.596661   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:10.596833   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:10.597103   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.597377   12512 main.go:141] libmachine: (addons-307023) Calling .GetState
	I0528 20:23:10.598018   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.598157   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.598293   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:23:10.598321   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.598467   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:23:10.598617   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:23:10.598636   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.598660   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:23:10.598767   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:23:10.598838   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:23:10.598995   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:23:10.599096   12512 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/id_rsa Username:docker}
	I0528 20:23:10.599259   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:23:10.599315   12512 main.go:141] libmachine: (addons-307023) Calling .DriverName
	I0528 20:23:10.599510   12512 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/id_rsa Username:docker}
	I0528 20:23:10.601019   12512 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0528 20:23:10.602470   12512 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0528 20:23:10.603727   12512 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0528 20:23:10.604920   12512 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0528 20:23:10.606129   12512 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0528 20:23:10.607284   12512 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0528 20:23:10.608612   12512 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0528 20:23:10.609959   12512 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0528 20:23:10.611260   12512 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0528 20:23:10.611281   12512 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0528 20:23:10.611306   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:23:10.614756   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.615168   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:23:10.615195   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.615336   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:23:10.615538   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:23:10.615673   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:23:10.615824   12512 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/id_rsa Username:docker}
	W0528 20:23:10.627769   12512 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:50730->192.168.39.230:22: read: connection reset by peer
	I0528 20:23:10.627802   12512 retry.go:31] will retry after 342.148262ms: ssh: handshake failed: read tcp 192.168.39.1:50730->192.168.39.230:22: read: connection reset by peer
	W0528 20:23:10.627869   12512 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:50732->192.168.39.230:22: read: connection reset by peer
	I0528 20:23:10.627881   12512 retry.go:31] will retry after 154.623703ms: ssh: handshake failed: read tcp 192.168.39.1:50732->192.168.39.230:22: read: connection reset by peer
	W0528 20:23:10.627994   12512 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:50748->192.168.39.230:22: read: connection reset by peer
	I0528 20:23:10.628019   12512 retry.go:31] will retry after 154.109106ms: ssh: handshake failed: read tcp 192.168.39.1:50748->192.168.39.230:22: read: connection reset by peer
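
The sshutil/retry lines above show the pattern behind these dial failures: a key-based SSH client for 192.168.39.230:22 is created, and handshake resets are retried after a short delay while the guest's sshd settles. A rough sketch of that pattern with golang.org/x/crypto/ssh; the backoff values and host-key handling here are illustrative, not minikube's exact implementation.

```go
package main

import (
	"fmt"
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry dials the node over SSH with key auth and retries handshake
// failures (e.g. "connection reset by peer") with a growing delay.
func dialWithRetry(addr, user, keyPath string, attempts int) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VMs only
		Timeout:         10 * time.Second,
	}
	var lastErr error
	for i := 0; i < attempts; i++ {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return client, nil
		}
		lastErr = err
		time.Sleep(time.Duration(i+1) * 200 * time.Millisecond)
	}
	return nil, fmt.Errorf("ssh dial %s failed after %d attempts: %w", addr, attempts, lastErr)
}

func main() {
	client, err := dialWithRetry("192.168.39.230:22", "docker",
		"/home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/id_rsa", 5)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
}
```
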
	I0528 20:23:10.641978   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38191
	I0528 20:23:10.642431   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:10.642922   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:10.642937   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:10.643293   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:10.643471   12512 main.go:141] libmachine: (addons-307023) Calling .GetState
	I0528 20:23:10.645452   12512 main.go:141] libmachine: (addons-307023) Calling .DriverName
	I0528 20:23:10.645842   12512 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0528 20:23:10.645861   12512 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0528 20:23:10.645879   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:23:10.648979   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.649425   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:23:10.649451   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:10.649625   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:23:10.649825   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:23:10.650008   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:23:10.650152   12512 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/id_rsa Username:docker}
	W0528 20:23:10.653783   12512 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:50754->192.168.39.230:22: read: connection reset by peer
	I0528 20:23:10.653807   12512 retry.go:31] will retry after 167.254965ms: ssh: handshake failed: read tcp 192.168.39.1:50754->192.168.39.230:22: read: connection reset by peer
	I0528 20:23:10.968648   12512 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 20:23:10.968978   12512 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0528 20:23:10.983115   12512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0528 20:23:11.000531   12512 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0528 20:23:11.000551   12512 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0528 20:23:11.018403   12512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0528 20:23:11.085444   12512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0528 20:23:11.110037   12512 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0528 20:23:11.110056   12512 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0528 20:23:11.116782   12512 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0528 20:23:11.116797   12512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0528 20:23:11.140739   12512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0528 20:23:11.156167   12512 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0528 20:23:11.156192   12512 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0528 20:23:11.158811   12512 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0528 20:23:11.158836   12512 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0528 20:23:11.180971   12512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0528 20:23:11.182035   12512 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0528 20:23:11.182053   12512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
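
The "scp memory --> /etc/kubernetes/addons/... (N bytes)" lines copy manifests that exist only in memory onto the node over the SSH connection. A small sketch of the same idea using github.com/pkg/sftp, which is an assumption for illustration only: minikube's ssh_runner has its own copy path, and writing under /etc/kubernetes normally requires sudo.

```go
package assets

import (
	"github.com/pkg/sftp"
	"golang.org/x/crypto/ssh"
)

// pushBytes writes an in-memory asset to remotePath on the node over an
// existing SSH connection (compare the "scp memory --> ..." lines above).
// It assumes the SSH user may write to remotePath; real addon paths need
// privilege escalation.
func pushBytes(conn *ssh.Client, remotePath string, data []byte) error {
	client, err := sftp.NewClient(conn)
	if err != nil {
		return err
	}
	defer client.Close()

	f, err := client.Create(remotePath)
	if err != nil {
		return err
	}
	defer f.Close()

	_, err = f.Write(data)
	return err
}
```
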
	I0528 20:23:11.233563   12512 node_ready.go:35] waiting up to 6m0s for node "addons-307023" to be "Ready" ...
	I0528 20:23:11.239265   12512 node_ready.go:49] node "addons-307023" has status "Ready":"True"
	I0528 20:23:11.239298   12512 node_ready.go:38] duration metric: took 5.695157ms for node "addons-307023" to be "Ready" ...
	I0528 20:23:11.239311   12512 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 20:23:11.250195   12512 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hmjmn" in "kube-system" namespace to be "Ready" ...
	I0528 20:23:11.292822   12512 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0528 20:23:11.292847   12512 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0528 20:23:11.311403   12512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0528 20:23:11.320200   12512 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0528 20:23:11.320232   12512 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0528 20:23:11.379938   12512 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0528 20:23:11.379964   12512 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0528 20:23:11.383564   12512 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0528 20:23:11.383582   12512 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0528 20:23:11.385897   12512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0528 20:23:11.394584   12512 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0528 20:23:11.394611   12512 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0528 20:23:11.415311   12512 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0528 20:23:11.415337   12512 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0528 20:23:11.469500   12512 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0528 20:23:11.469522   12512 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0528 20:23:11.477009   12512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0528 20:23:11.540329   12512 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0528 20:23:11.540363   12512 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0528 20:23:11.544803   12512 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0528 20:23:11.544822   12512 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0528 20:23:11.620898   12512 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0528 20:23:11.620931   12512 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0528 20:23:11.639257   12512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0528 20:23:11.711000   12512 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0528 20:23:11.711022   12512 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0528 20:23:11.713993   12512 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0528 20:23:11.714009   12512 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0528 20:23:11.742124   12512 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0528 20:23:11.742151   12512 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0528 20:23:11.780020   12512 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0528 20:23:11.780043   12512 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0528 20:23:11.834451   12512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0528 20:23:11.884374   12512 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0528 20:23:11.884396   12512 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0528 20:23:11.956177   12512 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0528 20:23:11.956217   12512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0528 20:23:12.008160   12512 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0528 20:23:12.008185   12512 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0528 20:23:12.070468   12512 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0528 20:23:12.070487   12512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0528 20:23:12.159915   12512 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0528 20:23:12.159936   12512 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0528 20:23:12.315185   12512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0528 20:23:12.341966   12512 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0528 20:23:12.341991   12512 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0528 20:23:12.438960   12512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0528 20:23:12.633406   12512 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0528 20:23:12.633436   12512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0528 20:23:12.647501   12512 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0528 20:23:12.647528   12512 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0528 20:23:12.878290   12512 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0528 20:23:12.878314   12512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0528 20:23:12.937778   12512 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0528 20:23:12.937808   12512 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0528 20:23:13.221393   12512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0528 20:23:13.256685   12512 pod_ready.go:102] pod "coredns-7db6d8ff4d-hmjmn" in "kube-system" namespace has status "Ready":"False"
	I0528 20:23:13.268417   12512 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0528 20:23:13.268436   12512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0528 20:23:13.412451   12512 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0528 20:23:13.412476   12512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0528 20:23:13.819282   12512 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.850265429s)
	I0528 20:23:13.819313   12512 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
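
The long bash pipeline completed above rewrites the coredns ConfigMap so that host.minikube.internal resolves to 192.168.39.1. The same edit can be sketched with client-go instead of sed-over-kubectl-over-SSH; the kubeconfig path is taken from the log, and the indentation of the generated hosts block is approximate.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ctx := context.Background()
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}

	corefile := cm.Data["Corefile"]
	if strings.Contains(corefile, "host.minikube.internal") {
		return // already injected
	}

	// Insert a hosts block just before the forward plugin, mirroring the sed above.
	var out []string
	for _, line := range strings.Split(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward .") {
			out = append(out,
				"    hosts {",
				"       192.168.39.1 host.minikube.internal",
				"       fallthrough",
				"    }")
		}
		out = append(out, line)
	}
	cm.Data["Corefile"] = strings.Join(out, "\n")

	if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("host record injected into CoreDNS's ConfigMap")
}
```
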
	I0528 20:23:13.819322   12512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.836171727s)
	I0528 20:23:13.819368   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:13.819382   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:13.819718   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:13.819739   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:13.819759   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:13.819845   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:13.819867   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:13.820200   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:13.820215   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:13.909469   12512 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0528 20:23:13.909492   12512 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0528 20:23:14.218739   12512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0528 20:23:14.333140   12512 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-307023" context rescaled to 1 replicas
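
kapi.go rescales the coredns deployment to one replica once the host record is in place. A minimal client-go sketch of that rescale via the Scale subresource; clientset construction is omitted and the package name is illustrative.

```go
package kapi

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// scaleCoreDNS sets the coredns deployment to the requested replica count via
// the Scale subresource, matching the "rescaled to 1 replicas" step above.
func scaleCoreDNS(ctx context.Context, cs kubernetes.Interface, replicas int32) error {
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = replicas
	_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	return err
}
```
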
	I0528 20:23:15.261589   12512 pod_ready.go:102] pod "coredns-7db6d8ff4d-hmjmn" in "kube-system" namespace has status "Ready":"False"
	I0528 20:23:15.734878   12512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.716428977s)
	I0528 20:23:15.734945   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:15.734945   12512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.649468195s)
	I0528 20:23:15.734961   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:15.734985   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:15.734996   12512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.594231216s)
	I0528 20:23:15.735003   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:15.735025   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:15.735036   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:15.735049   12512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.554047986s)
	I0528 20:23:15.735069   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:15.735076   12512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.423647935s)
	I0528 20:23:15.735081   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:15.735095   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:15.735105   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:15.735115   12512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.349193179s)
	I0528 20:23:15.735134   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:15.735147   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:15.735474   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:15.735488   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:15.735497   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:15.735505   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:15.735555   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:15.735562   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:15.735570   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:15.735577   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:15.735858   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:15.735889   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:15.735895   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:15.735903   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:15.735910   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:15.736021   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:15.736062   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:15.736068   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:15.736075   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:15.736081   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:15.736183   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:15.736222   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:15.736227   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:15.736266   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:15.736280   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:15.736289   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:15.736302   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:15.736307   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:15.736312   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:15.736316   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:15.736364   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:15.736370   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:15.736377   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:15.736384   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:15.737131   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:15.737159   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:15.737165   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:15.737703   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:15.737729   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:15.737736   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:15.738069   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:15.738099   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:15.738107   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:15.738257   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:15.738283   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:15.738289   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:15.738348   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:15.738358   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:15.738366   12512 addons.go:475] Verifying addon registry=true in "addons-307023"
	I0528 20:23:15.740100   12512 out.go:177] * Verifying registry addon...
	I0528 20:23:15.742105   12512 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0528 20:23:15.859525   12512 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0528 20:23:15.859544   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:15.982573   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:15.982601   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:15.982887   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:15.982910   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:15.982918   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	W0528 20:23:15.983009   12512 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0528 20:23:16.008816   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:16.008839   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:16.009213   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:16.009234   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:16.274665   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:16.289932   12512 pod_ready.go:92] pod "coredns-7db6d8ff4d-hmjmn" in "kube-system" namespace has status "Ready":"True"
	I0528 20:23:16.289977   12512 pod_ready.go:81] duration metric: took 5.039759258s for pod "coredns-7db6d8ff4d-hmjmn" in "kube-system" namespace to be "Ready" ...
	I0528 20:23:16.289990   12512 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-p4qdk" in "kube-system" namespace to be "Ready" ...
	I0528 20:23:16.317839   12512 pod_ready.go:92] pod "coredns-7db6d8ff4d-p4qdk" in "kube-system" namespace has status "Ready":"True"
	I0528 20:23:16.317860   12512 pod_ready.go:81] duration metric: took 27.863115ms for pod "coredns-7db6d8ff4d-p4qdk" in "kube-system" namespace to be "Ready" ...
	I0528 20:23:16.317869   12512 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-307023" in "kube-system" namespace to be "Ready" ...
	I0528 20:23:16.351327   12512 pod_ready.go:92] pod "etcd-addons-307023" in "kube-system" namespace has status "Ready":"True"
	I0528 20:23:16.351352   12512 pod_ready.go:81] duration metric: took 33.469285ms for pod "etcd-addons-307023" in "kube-system" namespace to be "Ready" ...
	I0528 20:23:16.351364   12512 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-307023" in "kube-system" namespace to be "Ready" ...
	I0528 20:23:16.390763   12512 pod_ready.go:92] pod "kube-apiserver-addons-307023" in "kube-system" namespace has status "Ready":"True"
	I0528 20:23:16.390783   12512 pod_ready.go:81] duration metric: took 39.411236ms for pod "kube-apiserver-addons-307023" in "kube-system" namespace to be "Ready" ...
	I0528 20:23:16.390793   12512 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-307023" in "kube-system" namespace to be "Ready" ...
	I0528 20:23:16.416036   12512 pod_ready.go:92] pod "kube-controller-manager-addons-307023" in "kube-system" namespace has status "Ready":"True"
	I0528 20:23:16.416057   12512 pod_ready.go:81] duration metric: took 25.257529ms for pod "kube-controller-manager-addons-307023" in "kube-system" namespace to be "Ready" ...
	I0528 20:23:16.416070   12512 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zm9r7" in "kube-system" namespace to be "Ready" ...
	I0528 20:23:16.663062   12512 pod_ready.go:92] pod "kube-proxy-zm9r7" in "kube-system" namespace has status "Ready":"True"
	I0528 20:23:16.663086   12512 pod_ready.go:81] duration metric: took 247.006121ms for pod "kube-proxy-zm9r7" in "kube-system" namespace to be "Ready" ...
	I0528 20:23:16.663097   12512 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-307023" in "kube-system" namespace to be "Ready" ...
	I0528 20:23:16.788485   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:17.055690   12512 pod_ready.go:92] pod "kube-scheduler-addons-307023" in "kube-system" namespace has status "Ready":"True"
	I0528 20:23:17.055719   12512 pod_ready.go:81] duration metric: took 392.614322ms for pod "kube-scheduler-addons-307023" in "kube-system" namespace to be "Ready" ...
	I0528 20:23:17.055730   12512 pod_ready.go:38] duration metric: took 5.816404218s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
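
The pod_ready.go lines above poll each system-critical pod by name until its Ready condition reports True, with a 6m0s ceiling per pod. A simplified sketch of that check using client-go's wait helpers; the polling helper and interval are my choice here, not necessarily what pod_ready.go uses.

```go
package main

import (
	"context"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod until its Ready condition is True or the timeout expires.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // not found yet; keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	if err := waitPodReady(context.Background(), cs, "kube-system",
		"kube-scheduler-addons-307023", 6*time.Minute); err != nil {
		log.Fatal(err)
	}
}
```
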
	I0528 20:23:17.055749   12512 api_server.go:52] waiting for apiserver process to appear ...
	I0528 20:23:17.055814   12512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 20:23:17.252721   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:17.567115   12512 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0528 20:23:17.567162   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:23:17.570676   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:17.571214   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:23:17.571245   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:17.571535   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:23:17.571746   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:23:17.571908   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:23:17.572044   12512 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/id_rsa Username:docker}
	I0528 20:23:17.748141   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:17.789839   12512 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0528 20:23:18.010045   12512 addons.go:234] Setting addon gcp-auth=true in "addons-307023"
	I0528 20:23:18.010099   12512 host.go:66] Checking if "addons-307023" exists ...
	I0528 20:23:18.010416   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:18.010448   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:18.024864   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36913
	I0528 20:23:18.025355   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:18.025900   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:18.025924   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:18.026198   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:18.026785   12512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:23:18.026820   12512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:23:18.041280   12512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36089
	I0528 20:23:18.041700   12512 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:23:18.042173   12512 main.go:141] libmachine: Using API Version  1
	I0528 20:23:18.042194   12512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:23:18.042515   12512 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:23:18.042720   12512 main.go:141] libmachine: (addons-307023) Calling .GetState
	I0528 20:23:18.044344   12512 main.go:141] libmachine: (addons-307023) Calling .DriverName
	I0528 20:23:18.044577   12512 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0528 20:23:18.044598   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHHostname
	I0528 20:23:18.047208   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:18.047674   12512 main.go:141] libmachine: (addons-307023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c7:f9", ip: ""} in network mk-addons-307023: {Iface:virbr1 ExpiryTime:2024-05-28 21:22:31 +0000 UTC Type:0 Mac:52:54:00:40:c7:f9 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:addons-307023 Clientid:01:52:54:00:40:c7:f9}
	I0528 20:23:18.047700   12512 main.go:141] libmachine: (addons-307023) DBG | domain addons-307023 has defined IP address 192.168.39.230 and MAC address 52:54:00:40:c7:f9 in network mk-addons-307023
	I0528 20:23:18.047896   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHPort
	I0528 20:23:18.048086   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHKeyPath
	I0528 20:23:18.048259   12512 main.go:141] libmachine: (addons-307023) Calling .GetSSHUsername
	I0528 20:23:18.048389   12512 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/addons-307023/id_rsa Username:docker}
	I0528 20:23:18.247959   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:18.757391   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:19.247239   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:19.775262   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:19.786311   12512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.30926316s)
	I0528 20:23:19.786367   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:19.786382   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:19.786389   12512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.147087587s)
	I0528 20:23:19.786433   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:19.786453   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:19.786459   12512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.951966101s)
	I0528 20:23:19.786482   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:19.786492   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:19.786743   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:19.786761   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:19.786771   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:19.786765   12512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.471539652s)
	I0528 20:23:19.786780   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	W0528 20:23:19.786811   12512 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0528 20:23:19.786841   12512 retry.go:31] will retry after 201.548356ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
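The failure above is purely an ordering race: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass, but the snapshot.storage.k8s.io CRDs created in the same kubectl invocation are not yet established when that object is validated, so minikube retries (and later re-applies the same manifests with --force, as seen further below). A minimal sketch of avoiding the race by waiting for the CRD first, using only the manifest paths and CRD name that appear in the log:

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for condition=established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml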
	I0528 20:23:19.786901   12512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.347907855s)
	I0528 20:23:19.786919   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:19.786928   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:19.787041   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:19.787067   12512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.565646841s)
	I0528 20:23:19.787078   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:19.787086   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:19.787088   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:19.787095   12512 addons.go:475] Verifying addon ingress=true in "addons-307023"
	I0528 20:23:19.787134   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:19.790000   12512 out.go:177] * Verifying ingress addon...
	I0528 20:23:19.787161   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:19.787178   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:19.787186   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:19.787200   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:19.787224   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:19.787098   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:19.791511   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:19.791527   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:19.791528   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:19.791538   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:19.791544   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:19.791552   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:19.791541   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:19.791618   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:19.791607   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:19.791797   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:19.791798   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:19.791826   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:19.791828   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:19.791833   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:19.791836   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:19.791845   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:19.791852   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:19.791850   12512 addons.go:475] Verifying addon metrics-server=true in "addons-307023"
	I0528 20:23:19.791916   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:19.791948   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:19.791956   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:19.794705   12512 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-307023 service yakd-dashboard -n yakd-dashboard
	
	I0528 20:23:19.792100   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:19.792120   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:19.792126   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:19.792144   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:19.792481   12512 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0528 20:23:19.796102   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:19.796134   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:19.812456   12512 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0528 20:23:19.812479   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:19.989241   12512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0528 20:23:20.246734   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:20.300461   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:20.759745   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:20.848521   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:21.016432   12512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.797633005s)
	I0528 20:23:21.016471   12512 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.971878962s)
	I0528 20:23:21.016488   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:21.016502   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:21.018267   12512 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0528 20:23:21.016440   12512 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.960603649s)
	I0528 20:23:21.016854   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:21.016896   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:21.019698   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:21.019710   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:21.019722   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:21.019719   12512 api_server.go:72] duration metric: took 10.59091718s to wait for apiserver process to appear ...
	I0528 20:23:21.019736   12512 api_server.go:88] waiting for apiserver healthz status ...
	I0528 20:23:21.019763   12512 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8443/healthz ...
	I0528 20:23:21.021343   12512 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0528 20:23:21.019954   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:21.019984   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:21.023213   12512 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0528 20:23:21.023222   12512 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0528 20:23:21.023248   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:21.023267   12512 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-307023"
	I0528 20:23:21.024737   12512 out.go:177] * Verifying csi-hostpath-driver addon...
	I0528 20:23:21.026730   12512 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0528 20:23:21.031083   12512 api_server.go:279] https://192.168.39.230:8443/healthz returned 200:
	ok
	I0528 20:23:21.032068   12512 api_server.go:141] control plane version: v1.30.1
	I0528 20:23:21.032084   12512 api_server.go:131] duration metric: took 12.337642ms to wait for apiserver health ...
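The healthz probe above is a direct GET against the apiserver endpoint logged a few lines earlier. A minimal sketch of the same check from a shell, assuming the default minikube configuration in which /healthz is readable without credentials (-k skips verification of the cluster's self-signed certificate):

	curl -sk https://192.168.39.230:8443/healthz
	# expected output on a healthy control plane: ok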
	I0528 20:23:21.032091   12512 system_pods.go:43] waiting for kube-system pods to appear ...
	I0528 20:23:21.048862   12512 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0528 20:23:21.048891   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:21.049187   12512 system_pods.go:59] 19 kube-system pods found
	I0528 20:23:21.049218   12512 system_pods.go:61] "coredns-7db6d8ff4d-hmjmn" [805eb200-abef-49e1-b441-570367fec5ad] Running
	I0528 20:23:21.049229   12512 system_pods.go:61] "coredns-7db6d8ff4d-p4qdk" [96cce9c7-26e9-4430-80e9-194c4a5c5dda] Running
	I0528 20:23:21.049240   12512 system_pods.go:61] "csi-hostpath-attacher-0" [b16bd4e8-843e-4529-9ba9-dce28f647e6d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0528 20:23:21.049251   12512 system_pods.go:61] "csi-hostpath-resizer-0" [3b1d8f2b-28d1-4af5-ac5a-5b6f25719826] Pending
	I0528 20:23:21.049269   12512 system_pods.go:61] "csi-hostpathplugin-hlrts" [5595c3ca-5a1c-4c3c-9647-413836e28765] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0528 20:23:21.049286   12512 system_pods.go:61] "etcd-addons-307023" [af3e376d-a2c2-4316-ab43-053ff7264a31] Running
	I0528 20:23:21.049300   12512 system_pods.go:61] "kube-apiserver-addons-307023" [9e657315-1dc1-497d-95a9-dc4bd6d39d63] Running
	I0528 20:23:21.049308   12512 system_pods.go:61] "kube-controller-manager-addons-307023" [bca735b7-0408-4bab-90f4-0a4119c53722] Running
	I0528 20:23:21.049316   12512 system_pods.go:61] "kube-ingress-dns-minikube" [1f7b4e7c-b982-4c04-add7-525795548760] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0528 20:23:21.049324   12512 system_pods.go:61] "kube-proxy-zm9r7" [02de5251-8d15-4ee9-b99b-978c02f4f9c5] Running
	I0528 20:23:21.049334   12512 system_pods.go:61] "kube-scheduler-addons-307023" [98fe07e8-5d59-46a1-a938-37a1b030c5f5] Running
	I0528 20:23:21.049344   12512 system_pods.go:61] "metrics-server-c59844bb4-wjvkg" [a9aa82de-329c-4c74-bdc0-f304386c8ede] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0528 20:23:21.049356   12512 system_pods.go:61] "nvidia-device-plugin-daemonset-fw58d" [9a054b41-fa5f-4c2b-bac0-5e8f84e8860f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0528 20:23:21.049368   12512 system_pods.go:61] "registry-g8f66" [d44205f8-5d8f-4cb5-86a9-a06ec1a83ab3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0528 20:23:21.049390   12512 system_pods.go:61] "registry-proxy-6v96c" [c226957d-d70d-48ff-85a3-d800697e600d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0528 20:23:21.049412   12512 system_pods.go:61] "snapshot-controller-745499f584-hj8gg" [d46d2593-66fb-4cb3-a416-cd8c60b6e4df] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0528 20:23:21.049423   12512 system_pods.go:61] "snapshot-controller-745499f584-p8v2q" [45292627-a14e-42c3-8c4b-77094065b3de] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0528 20:23:21.049431   12512 system_pods.go:61] "storage-provisioner" [91636457-a3cb-48a7-bfd4-58907cb354d4] Running
	I0528 20:23:21.049439   12512 system_pods.go:61] "tiller-deploy-6677d64bcd-9kf86" [2e6adf96-5773-4664-abee-77443509067d] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0528 20:23:21.049449   12512 system_pods.go:74] duration metric: took 17.352384ms to wait for pod list to return data ...
	I0528 20:23:21.049463   12512 default_sa.go:34] waiting for default service account to be created ...
	I0528 20:23:21.073140   12512 default_sa.go:45] found service account: "default"
	I0528 20:23:21.073161   12512 default_sa.go:55] duration metric: took 23.688228ms for default service account to be created ...
	I0528 20:23:21.073168   12512 system_pods.go:116] waiting for k8s-apps to be running ...
	I0528 20:23:21.092080   12512 system_pods.go:86] 19 kube-system pods found
	I0528 20:23:21.092117   12512 system_pods.go:89] "coredns-7db6d8ff4d-hmjmn" [805eb200-abef-49e1-b441-570367fec5ad] Running
	I0528 20:23:21.092127   12512 system_pods.go:89] "coredns-7db6d8ff4d-p4qdk" [96cce9c7-26e9-4430-80e9-194c4a5c5dda] Running
	I0528 20:23:21.092137   12512 system_pods.go:89] "csi-hostpath-attacher-0" [b16bd4e8-843e-4529-9ba9-dce28f647e6d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0528 20:23:21.092164   12512 system_pods.go:89] "csi-hostpath-resizer-0" [3b1d8f2b-28d1-4af5-ac5a-5b6f25719826] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0528 20:23:21.092182   12512 system_pods.go:89] "csi-hostpathplugin-hlrts" [5595c3ca-5a1c-4c3c-9647-413836e28765] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0528 20:23:21.092190   12512 system_pods.go:89] "etcd-addons-307023" [af3e376d-a2c2-4316-ab43-053ff7264a31] Running
	I0528 20:23:21.092201   12512 system_pods.go:89] "kube-apiserver-addons-307023" [9e657315-1dc1-497d-95a9-dc4bd6d39d63] Running
	I0528 20:23:21.092211   12512 system_pods.go:89] "kube-controller-manager-addons-307023" [bca735b7-0408-4bab-90f4-0a4119c53722] Running
	I0528 20:23:21.092223   12512 system_pods.go:89] "kube-ingress-dns-minikube" [1f7b4e7c-b982-4c04-add7-525795548760] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0528 20:23:21.092239   12512 system_pods.go:89] "kube-proxy-zm9r7" [02de5251-8d15-4ee9-b99b-978c02f4f9c5] Running
	I0528 20:23:21.092251   12512 system_pods.go:89] "kube-scheduler-addons-307023" [98fe07e8-5d59-46a1-a938-37a1b030c5f5] Running
	I0528 20:23:21.092269   12512 system_pods.go:89] "metrics-server-c59844bb4-wjvkg" [a9aa82de-329c-4c74-bdc0-f304386c8ede] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0528 20:23:21.092282   12512 system_pods.go:89] "nvidia-device-plugin-daemonset-fw58d" [9a054b41-fa5f-4c2b-bac0-5e8f84e8860f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0528 20:23:21.092294   12512 system_pods.go:89] "registry-g8f66" [d44205f8-5d8f-4cb5-86a9-a06ec1a83ab3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0528 20:23:21.092306   12512 system_pods.go:89] "registry-proxy-6v96c" [c226957d-d70d-48ff-85a3-d800697e600d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0528 20:23:21.092317   12512 system_pods.go:89] "snapshot-controller-745499f584-hj8gg" [d46d2593-66fb-4cb3-a416-cd8c60b6e4df] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0528 20:23:21.092328   12512 system_pods.go:89] "snapshot-controller-745499f584-p8v2q" [45292627-a14e-42c3-8c4b-77094065b3de] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0528 20:23:21.092338   12512 system_pods.go:89] "storage-provisioner" [91636457-a3cb-48a7-bfd4-58907cb354d4] Running
	I0528 20:23:21.092349   12512 system_pods.go:89] "tiller-deploy-6677d64bcd-9kf86" [2e6adf96-5773-4664-abee-77443509067d] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0528 20:23:21.092360   12512 system_pods.go:126] duration metric: took 19.18558ms to wait for k8s-apps to be running ...
	I0528 20:23:21.092374   12512 system_svc.go:44] waiting for kubelet service to be running ....
	I0528 20:23:21.092423   12512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 20:23:21.125643   12512 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0528 20:23:21.125668   12512 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0528 20:23:21.175294   12512 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0528 20:23:21.175317   12512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0528 20:23:21.250803   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:21.252824   12512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0528 20:23:21.301200   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:21.533176   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:21.747364   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:21.803484   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:22.034172   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:22.123594   12512 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.031141088s)
	I0528 20:23:22.123630   12512 system_svc.go:56] duration metric: took 1.031254848s WaitForService to wait for kubelet
	I0528 20:23:22.123637   12512 kubeadm.go:576] duration metric: took 11.694839227s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 20:23:22.123655   12512 node_conditions.go:102] verifying NodePressure condition ...
	I0528 20:23:22.123938   12512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.134639499s)
	I0528 20:23:22.124005   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:22.124024   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:22.124330   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:22.124399   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:22.124414   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:22.124428   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:22.124439   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:22.125327   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:22.125356   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:22.125373   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:22.127236   12512 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 20:23:22.127262   12512 node_conditions.go:123] node cpu capacity is 2
	I0528 20:23:22.127274   12512 node_conditions.go:105] duration metric: took 3.614226ms to run NodePressure ...
	I0528 20:23:22.127288   12512 start.go:240] waiting for startup goroutines ...
	I0528 20:23:22.247379   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:22.301087   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:22.540831   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:22.758244   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:22.783448   12512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.530586876s)
	I0528 20:23:22.783510   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:22.783525   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:22.783825   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:22.783838   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:22.783845   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:22.783855   12512 main.go:141] libmachine: Making call to close driver server
	I0528 20:23:22.783863   12512 main.go:141] libmachine: (addons-307023) Calling .Close
	I0528 20:23:22.784109   12512 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:23:22.784160   12512 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:23:22.784140   12512 main.go:141] libmachine: (addons-307023) DBG | Closing plugin on server side
	I0528 20:23:22.785281   12512 addons.go:475] Verifying addon gcp-auth=true in "addons-307023"
	I0528 20:23:22.786985   12512 out.go:177] * Verifying gcp-auth addon...
	I0528 20:23:22.789277   12512 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0528 20:23:22.820039   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:22.833349   12512 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0528 20:23:22.833376   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:23.032406   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:23.249396   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:23.295025   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:23.304642   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:23.531692   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:23.746736   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:23.793438   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:23.799588   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:24.031727   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:24.246444   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:24.293020   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:24.300117   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:24.533635   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:24.747410   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:24.793517   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:24.800062   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:25.033625   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:25.247227   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:25.293599   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:25.300270   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:25.532832   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:25.747672   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:25.792995   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:25.802024   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:26.032296   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:26.247028   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:26.294932   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:26.300310   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:26.618928   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:26.746812   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:26.794004   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:26.800545   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:27.032170   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:27.246753   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:27.292418   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:27.300268   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:27.533968   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:27.746417   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:27.794077   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:27.801539   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:28.031793   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:28.248817   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:28.293444   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:28.299620   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:28.532373   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:28.747756   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:28.792575   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:28.800233   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:29.032153   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:29.247541   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:29.293639   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:29.299631   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:29.532535   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:29.748755   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:29.793461   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:29.799362   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:30.034083   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:30.247320   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:30.293494   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:30.299878   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:30.531812   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:30.748423   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:30.793875   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:30.799689   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:31.032425   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:31.248326   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:31.293269   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:31.301083   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:31.532665   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:31.748277   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:31.794798   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:31.802919   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:32.032257   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:32.247021   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:32.292544   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:32.300006   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:32.532576   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:32.747388   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:32.793822   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:32.800298   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:33.033127   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:33.246841   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:33.293639   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:33.300018   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:33.533819   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:33.748051   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:33.793562   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:33.799601   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:34.031898   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:34.246538   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:34.293276   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:34.302411   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:34.534869   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:35.074731   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:35.077352   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:35.077609   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:35.078038   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:35.247209   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:35.293996   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:35.300397   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:35.532507   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:35.746181   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:35.792838   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:35.800254   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:36.032290   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:36.247032   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:36.292761   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:36.300411   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:36.532632   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:36.756098   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:36.839908   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:36.841263   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:37.208432   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:37.247160   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:37.293327   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:37.300973   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:37.533445   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:37.746806   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:37.792680   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:37.799609   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:38.036747   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:38.250834   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:38.293113   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:38.300086   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:38.532890   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:38.745963   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:38.792747   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:38.800181   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:39.032507   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:39.247241   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:39.293460   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:39.299894   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:39.533582   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:39.747135   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:39.792613   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:39.799829   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:40.032544   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:40.247078   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:40.292654   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:40.300059   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:40.532296   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:40.747338   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:40.792800   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:40.800651   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:41.034756   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:41.246819   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:41.293549   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:41.300048   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:41.531953   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:41.747243   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:41.792738   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:41.800149   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:42.033384   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:42.248288   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:42.324350   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:42.327631   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:42.531529   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:42.747515   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:42.793065   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:42.800621   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:43.032261   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:43.247382   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:43.294451   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:43.300122   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:43.532479   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:43.747415   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:43.793229   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:43.800721   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:44.032603   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:44.246433   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:44.293617   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:44.301331   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:44.534585   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:44.747838   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:44.792855   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:44.811757   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:45.033538   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:45.246703   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:45.293257   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:45.300777   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:45.536290   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:45.747297   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:45.793011   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:45.801296   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:46.032099   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:46.246870   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:46.293119   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:46.300381   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:46.532654   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:46.759549   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:46.794263   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:46.802282   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:47.031867   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:47.691122   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:47.693189   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:47.697034   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:47.697420   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:47.747438   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:47.792690   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:47.799816   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:48.032865   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:48.246245   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:48.293522   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:48.299813   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:48.531993   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:48.746665   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:48.793382   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:48.799754   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:49.032569   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:49.246967   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:49.293501   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:49.300428   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:49.532336   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:49.746874   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:49.794315   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:49.802671   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:50.033162   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:50.246485   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:50.293576   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:50.300276   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:50.533141   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:50.746879   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:50.792933   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:50.800641   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:51.032283   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:51.246760   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:51.292384   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:51.299275   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:51.539650   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:51.747066   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:51.792832   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:51.799945   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:52.032531   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:52.249997   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:52.293128   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:52.301514   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:52.532416   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:52.747431   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:52.793471   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:52.799719   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:53.032415   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:53.247347   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:53.293187   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:53.300393   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:53.534536   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:53.747368   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:53.793491   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:53.800038   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:54.033071   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:54.247960   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:54.292740   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:54.300241   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:54.536093   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:54.746251   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:54.793282   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:54.800531   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:55.032279   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:55.247255   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:23:55.293564   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:55.300068   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:55.532970   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:55.747106   12512 kapi.go:107] duration metric: took 40.004999177s to wait for kubernetes.io/minikube-addons=registry ...
	I0528 20:23:55.792967   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:55.800577   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:56.033059   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:56.293920   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:56.300772   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:56.532844   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:56.793892   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:56.800356   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:57.032569   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:57.295057   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:57.300965   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:57.533021   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:58.016939   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:58.017877   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:58.031623   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:58.294201   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:58.300742   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:58.533403   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:58.792877   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:58.802336   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:59.036272   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:59.292892   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:59.306951   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:23:59.531943   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:23:59.792922   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:23:59.800072   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:00.032901   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:00.292561   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:00.299849   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:00.532538   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:00.794023   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:00.803706   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:01.032511   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:01.293520   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:01.300080   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:01.533182   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:01.793876   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:01.800092   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:02.032464   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:02.292853   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:02.312156   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:02.533177   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:02.792910   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:02.800036   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:03.034629   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:03.293381   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:03.299886   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:03.533568   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:03.793345   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:03.799612   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:04.032491   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:04.348417   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:04.351673   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:04.531946   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:04.792845   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:04.802137   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:05.032712   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:05.293319   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:05.301182   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:05.531863   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:05.793047   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:05.801234   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:06.031986   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:06.292870   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:06.300463   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:06.533029   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:06.792678   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:06.800075   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:07.032966   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:07.292719   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:07.300558   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:07.532018   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:07.792358   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:07.799596   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:08.032612   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:08.296004   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:08.300298   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:08.663981   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:08.793608   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:08.800055   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:09.033700   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:09.293102   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:09.300679   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:09.532549   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:09.793007   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:09.800426   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:10.033215   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:10.292810   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:10.300155   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:10.532677   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:10.793238   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:10.800572   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:11.032528   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:11.293265   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:11.300932   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:11.533708   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:11.793191   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:11.800673   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:12.032635   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:12.458278   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:12.458565   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:12.540137   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:12.792699   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:12.800206   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:13.033128   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:13.294010   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:13.300429   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:13.533076   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:13.792132   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:13.800445   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:14.265535   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:14.295347   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:14.301157   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:14.532300   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:14.792938   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:14.800402   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:15.036743   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:15.293338   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:15.301060   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:15.532762   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:15.793248   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:15.799838   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:16.032469   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:16.293417   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:16.299692   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:16.534033   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:16.793267   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:16.800958   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:17.031867   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:17.293002   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:17.300559   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:17.534381   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:17.793397   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:17.799674   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:18.035982   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:18.293362   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:18.301234   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:18.532613   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:18.793651   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:18.799855   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:19.032285   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:19.293251   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:19.300747   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:19.532934   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:19.793035   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:19.800146   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:20.032834   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:20.293187   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:20.301002   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:20.531758   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:20.792566   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:20.799813   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:21.032052   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:21.292952   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:21.300903   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:21.533525   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:21.795069   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:21.801159   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:22.033177   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:22.293381   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:22.304485   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:22.532079   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:22.792115   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:22.800865   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:23.033016   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:23.293718   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:23.300364   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:23.534522   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:23.796844   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:23.808157   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:24.037080   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:24.293022   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:24.300357   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:24.532748   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:24.793869   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:24.799951   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:25.032397   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:25.292930   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:25.300629   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:25.532667   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:25.794617   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:25.800810   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:26.032555   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:26.293069   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:26.307137   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:26.537354   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:26.793537   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:26.800503   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:27.032281   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:27.298353   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:27.301569   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:27.531721   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:27.794543   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:27.800374   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:28.040512   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:28.292768   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:28.300432   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:28.534684   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:28.794907   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:28.802809   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:29.033815   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:29.293418   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:29.301085   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:29.532455   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:29.793249   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:29.802694   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:30.032712   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:30.595748   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:30.596674   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:30.599440   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:30.793036   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:30.800373   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:31.037660   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:31.293465   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:31.299558   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:31.532874   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:31.792543   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:31.800215   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:32.032426   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:32.294702   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:32.302778   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:32.533941   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:32.793906   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:32.799268   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:33.031898   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:33.295132   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:33.300645   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:33.533370   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:33.795073   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:33.801748   12512 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:24:34.032050   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:34.294460   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:34.299328   12512 kapi.go:107] duration metric: took 1m14.506847113s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0528 20:24:34.532579   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:34.794014   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:35.032225   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:35.293845   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:35.532835   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:35.793381   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:36.032823   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:36.293157   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:36.532279   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:36.793024   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:37.034638   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:37.293251   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:37.532496   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:37.794951   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 20:24:38.034377   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:38.293068   12512 kapi.go:107] duration metric: took 1m15.503786906s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0528 20:24:38.294794   12512 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-307023 cluster.
	I0528 20:24:38.296247   12512 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0528 20:24:38.297551   12512 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0528 20:24:38.532566   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:39.033654   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:39.532193   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:40.031824   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:40.532663   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:41.032846   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:41.540537   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:42.032689   12512 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:24:42.532771   12512 kapi.go:107] duration metric: took 1m21.50603798s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0528 20:24:42.534638   12512 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, nvidia-device-plugin, storage-provisioner, storage-provisioner-rancher, metrics-server, inspektor-gadget, yakd, helm-tiller, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0528 20:24:42.535872   12512 addons.go:510] duration metric: took 1m32.107045068s for enable addons: enabled=[cloud-spanner ingress-dns nvidia-device-plugin storage-provisioner storage-provisioner-rancher metrics-server inspektor-gadget yakd helm-tiller volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0528 20:24:42.535913   12512 start.go:245] waiting for cluster config update ...
	I0528 20:24:42.535929   12512 start.go:254] writing updated cluster config ...
	I0528 20:24:42.536249   12512 ssh_runner.go:195] Run: rm -f paused
	I0528 20:24:42.587087   12512 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0528 20:24:42.589127   12512 out.go:177] * Done! kubectl is now configured to use "addons-307023" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	May 28 20:30:42 addons-307023 crio[678]: time="2024-05-28 20:30:42.321242351Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716928242321218708,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584528,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e32c46a4-99ed-4912-b990-baff96731a81 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 20:30:42 addons-307023 crio[678]: time="2024-05-28 20:30:42.321704722Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=313a5b24-3811-495e-ab47-619d13505414 name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:30:42 addons-307023 crio[678]: time="2024-05-28 20:30:42.321762087Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=313a5b24-3811-495e-ab47-619d13505414 name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:30:42 addons-307023 crio[678]: time="2024-05-28 20:30:42.322074436Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:44e852d22ba1819ddf0ebf3d0807c81eb34686152ec8c0d2d28504332322f910,PodSandboxId:bf42055627c5f778e09c307e0d382b8cd97e98b2436484afb7ff68aaa6095122,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1716928061153443230,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-rrlcz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 64e12ad3-6c87-49f8-b726-fe0fbe89ae4a,},Annotations:map[string]string{io.kubernetes.container.hash: 6ff1a144,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b25b089b47c42d9c78f20b6b9c92fae06be840b62420d791663b1b24bc7309f5,PodSandboxId:0763ad6c0dc6d8501a1652e1f80c871817876837137898af4b5567b1887c73da,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:516475cc129da42866742567714ddc681e5eed7b9ee0b9e9c015e464b4221a00,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:501d84f5d06487ff81e506134dc922ed4fd2080d5521eb5b6ee4054fa17d15c4,State:CONTAINER_RUNNING,CreatedAt:1716927918822003784,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 827204ae-e6ad-4624-87ec-f215a8cd56dd,},Annotations:map[string]string{io.kubern
etes.container.hash: a5c3bd23,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b252fd37aa9f5d12c758c9f92d1d30b108d86442b0cc874ea70f6bbcb4652fd,PodSandboxId:cf610ed316048dec99893c84336ba42640572f6ea101da9f9c37e8a3027e281b,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bd42824d488ce58074f6d54cb051437d0dc2669f3f96a4d9b3b72a8d7ddda679,State:CONTAINER_RUNNING,CreatedAt:1716927889296922002,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-68456f997b-jtz8c,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: b1e41c1e-f373-4b51-9cf7-70350652cb99,},Annotations:map[string]string{io.kubernetes.container.hash: 737cf372,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea96afa17ea5229968ae5126d5b87ebacc473587a5b9b920907f3b088d2db71b,PodSandboxId:cbafe4ece952b44f2d401289fbd0398cb5d2750747349300574a5cf49a56c635,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1716927878035455173,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-9zg48,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 549f8b18-adb3-46d7-b9d6-66982b3a6ea6,},Annotations:map[string]string{io.kubernetes.container.hash: 8848d3f9,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f8b8e68eeb825dbfe1b66ff7ec29f0c1031fe1dcf332aa48ad22deb75ffb888,PodSandboxId:83d262bedf4d9e8087c85dcc670607f92c478fd2eaaf04bc77071634c2e71df1,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:17169
27850307483582,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-wpxcg,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: a9c0d228-38f8-4c7f-99d8-bd87c9f25ce2,},Annotations:map[string]string{io.kubernetes.container.hash: 55c66455,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:355a251e3cf31eb25dddce88b95ad2c82fec40efb40b26bf7d9a5ecbe490c54d,PodSandboxId:adc08090883c6d3e03290eecc1f5dc06e0ba00cc5efb1a80b0cc621418111219,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1716927841699867655,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-wjvkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9aa82de-329c-4c74-bdc0-f304386c8ede,},Annotations:map[string]string{io.kubernetes.container.hash: 36569b19,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c00a4fc421b3f7ba35f835a5ffb3586ed187a922a41a09e7d62310f40f28b4a,PodSandboxId:9c0d3246c219251ca87a4a7ec1763e4dd2e73259bee9e42c74b7c5be81800259,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f56173
42c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716927797215259911,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91636457-a3cb-48a7-bfd4-58907cb354d4,},Annotations:map[string]string{io.kubernetes.container.hash: cc7bb4e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5fc52623b4367389165518a9a3ad68d3a6dfd0bea62fee6e8dd4c68333d1d46,PodSandboxId:7c5c0b887193c90d66def7ae7eb242acb7366790eb1927bd6fa5dcbb3ed48e17,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00
797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716927793886249131,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hmjmn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 805eb200-abef-49e1-b441-570367fec5ad,},Annotations:map[string]string{io.kubernetes.container.hash: f422ecb9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee29a48aa62269f9e1e3af0e1a3bcef0cc8ba7014553ab82c93aaf128c2b3c2d,PodSan
dboxId:2495266d97bb1d5cb4f6cd6f29256ca074665a7c52909c4e509b0dfb148e7f85,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716927791278495326,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zm9r7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02de5251-8d15-4ee9-b99b-978c02f4f9c5,},Annotations:map[string]string{io.kubernetes.container.hash: c31dc7d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a3a1afe43af2cdcd45e85e5e47b2e24cc33c6f39bd25773f3d79aa3320f7d35,PodSandboxId:fc0fa40ac15ca4b2d833bff5a2
9b6698162423e2e5cf82e008671835e01bb941,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716927770428301949,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-307023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac4386d3c845bcc94595a3690ec06fc3,},Annotations:map[string]string{io.kubernetes.container.hash: 7722ad34,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d2d00755e2d2aa085fc127e00bff5cae198357367f1612c5a00654687283113,PodSandboxId:0881e09a01ed0effac1090fa172d7d93d343d9b37961684879dfd734a60797c7,Metadata
:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716927770398708161,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-307023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f11cf3c88298cfc595782890de812176,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56a2635bc2ea98051fa0060e6982cdbc878403ecd774b69a9700979890c020db,PodSandboxId:780d40c6d09cdb0796a0b522be0886c63e7d6f19d831fdbb49f94404e4680173,Metadata:&ContainerMetada
ta{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716927770369120842,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-307023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22ce70e57fa105e011ca6bdbe769de6c,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ea69bdbd09c4bf83134ec7a0266e2b89e83fba1e72332118c62196e4802c2fd,PodSandboxId:9c890007588d6da51a68d5745609454c364d5ae51919f8cc5cc222a0de66a20e,Metadata:&Conta
inerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716927770361963110,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-307023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9329d3d3dd989369304d748209ebae5,},Annotations:map[string]string{io.kubernetes.container.hash: 6f4bb928,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=313a5b24-3811-495e-ab47-619d13505414 name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:30:42 addons-307023 crio[678]: time="2024-05-28 20:30:42.360411070Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4dd080a4-d3bb-45a8-85f6-e7da5f1d44ed name=/runtime.v1.RuntimeService/Version
	May 28 20:30:42 addons-307023 crio[678]: time="2024-05-28 20:30:42.360479150Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4dd080a4-d3bb-45a8-85f6-e7da5f1d44ed name=/runtime.v1.RuntimeService/Version
	May 28 20:30:42 addons-307023 crio[678]: time="2024-05-28 20:30:42.362126579Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0ae9c2d3-30f1-4c32-9370-6897fb2678d9 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 20:30:42 addons-307023 crio[678]: time="2024-05-28 20:30:42.363429937Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716928242363404844,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584528,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0ae9c2d3-30f1-4c32-9370-6897fb2678d9 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 20:30:42 addons-307023 crio[678]: time="2024-05-28 20:30:42.363926144Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b37b2cf9-2dfc-4c0d-aad6-9e257c3cc959 name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:30:42 addons-307023 crio[678]: time="2024-05-28 20:30:42.363980037Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b37b2cf9-2dfc-4c0d-aad6-9e257c3cc959 name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:30:42 addons-307023 crio[678]: time="2024-05-28 20:30:42.364254337Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:44e852d22ba1819ddf0ebf3d0807c81eb34686152ec8c0d2d28504332322f910,PodSandboxId:bf42055627c5f778e09c307e0d382b8cd97e98b2436484afb7ff68aaa6095122,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1716928061153443230,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-rrlcz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 64e12ad3-6c87-49f8-b726-fe0fbe89ae4a,},Annotations:map[string]string{io.kubernetes.container.hash: 6ff1a144,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b25b089b47c42d9c78f20b6b9c92fae06be840b62420d791663b1b24bc7309f5,PodSandboxId:0763ad6c0dc6d8501a1652e1f80c871817876837137898af4b5567b1887c73da,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:516475cc129da42866742567714ddc681e5eed7b9ee0b9e9c015e464b4221a00,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:501d84f5d06487ff81e506134dc922ed4fd2080d5521eb5b6ee4054fa17d15c4,State:CONTAINER_RUNNING,CreatedAt:1716927918822003784,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 827204ae-e6ad-4624-87ec-f215a8cd56dd,},Annotations:map[string]string{io.kubern
etes.container.hash: a5c3bd23,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b252fd37aa9f5d12c758c9f92d1d30b108d86442b0cc874ea70f6bbcb4652fd,PodSandboxId:cf610ed316048dec99893c84336ba42640572f6ea101da9f9c37e8a3027e281b,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bd42824d488ce58074f6d54cb051437d0dc2669f3f96a4d9b3b72a8d7ddda679,State:CONTAINER_RUNNING,CreatedAt:1716927889296922002,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-68456f997b-jtz8c,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: b1e41c1e-f373-4b51-9cf7-70350652cb99,},Annotations:map[string]string{io.kubernetes.container.hash: 737cf372,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea96afa17ea5229968ae5126d5b87ebacc473587a5b9b920907f3b088d2db71b,PodSandboxId:cbafe4ece952b44f2d401289fbd0398cb5d2750747349300574a5cf49a56c635,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1716927878035455173,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-9zg48,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 549f8b18-adb3-46d7-b9d6-66982b3a6ea6,},Annotations:map[string]string{io.kubernetes.container.hash: 8848d3f9,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f8b8e68eeb825dbfe1b66ff7ec29f0c1031fe1dcf332aa48ad22deb75ffb888,PodSandboxId:83d262bedf4d9e8087c85dcc670607f92c478fd2eaaf04bc77071634c2e71df1,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:17169
27850307483582,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-wpxcg,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: a9c0d228-38f8-4c7f-99d8-bd87c9f25ce2,},Annotations:map[string]string{io.kubernetes.container.hash: 55c66455,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:355a251e3cf31eb25dddce88b95ad2c82fec40efb40b26bf7d9a5ecbe490c54d,PodSandboxId:adc08090883c6d3e03290eecc1f5dc06e0ba00cc5efb1a80b0cc621418111219,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1716927841699867655,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-wjvkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9aa82de-329c-4c74-bdc0-f304386c8ede,},Annotations:map[string]string{io.kubernetes.container.hash: 36569b19,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c00a4fc421b3f7ba35f835a5ffb3586ed187a922a41a09e7d62310f40f28b4a,PodSandboxId:9c0d3246c219251ca87a4a7ec1763e4dd2e73259bee9e42c74b7c5be81800259,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f56173
42c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716927797215259911,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91636457-a3cb-48a7-bfd4-58907cb354d4,},Annotations:map[string]string{io.kubernetes.container.hash: cc7bb4e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5fc52623b4367389165518a9a3ad68d3a6dfd0bea62fee6e8dd4c68333d1d46,PodSandboxId:7c5c0b887193c90d66def7ae7eb242acb7366790eb1927bd6fa5dcbb3ed48e17,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00
797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716927793886249131,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hmjmn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 805eb200-abef-49e1-b441-570367fec5ad,},Annotations:map[string]string{io.kubernetes.container.hash: f422ecb9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee29a48aa62269f9e1e3af0e1a3bcef0cc8ba7014553ab82c93aaf128c2b3c2d,PodSan
dboxId:2495266d97bb1d5cb4f6cd6f29256ca074665a7c52909c4e509b0dfb148e7f85,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716927791278495326,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zm9r7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02de5251-8d15-4ee9-b99b-978c02f4f9c5,},Annotations:map[string]string{io.kubernetes.container.hash: c31dc7d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a3a1afe43af2cdcd45e85e5e47b2e24cc33c6f39bd25773f3d79aa3320f7d35,PodSandboxId:fc0fa40ac15ca4b2d833bff5a2
9b6698162423e2e5cf82e008671835e01bb941,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716927770428301949,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-307023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac4386d3c845bcc94595a3690ec06fc3,},Annotations:map[string]string{io.kubernetes.container.hash: 7722ad34,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d2d00755e2d2aa085fc127e00bff5cae198357367f1612c5a00654687283113,PodSandboxId:0881e09a01ed0effac1090fa172d7d93d343d9b37961684879dfd734a60797c7,Metadata
:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716927770398708161,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-307023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f11cf3c88298cfc595782890de812176,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56a2635bc2ea98051fa0060e6982cdbc878403ecd774b69a9700979890c020db,PodSandboxId:780d40c6d09cdb0796a0b522be0886c63e7d6f19d831fdbb49f94404e4680173,Metadata:&ContainerMetada
ta{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716927770369120842,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-307023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22ce70e57fa105e011ca6bdbe769de6c,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ea69bdbd09c4bf83134ec7a0266e2b89e83fba1e72332118c62196e4802c2fd,PodSandboxId:9c890007588d6da51a68d5745609454c364d5ae51919f8cc5cc222a0de66a20e,Metadata:&Conta
inerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716927770361963110,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-307023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9329d3d3dd989369304d748209ebae5,},Annotations:map[string]string{io.kubernetes.container.hash: 6f4bb928,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b37b2cf9-2dfc-4c0d-aad6-9e257c3cc959 name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:30:42 addons-307023 crio[678]: time="2024-05-28 20:30:42.402709756Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=24be885b-f43f-4b8b-b46c-7ce29a47216d name=/runtime.v1.RuntimeService/Version
	May 28 20:30:42 addons-307023 crio[678]: time="2024-05-28 20:30:42.402848296Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=24be885b-f43f-4b8b-b46c-7ce29a47216d name=/runtime.v1.RuntimeService/Version
	May 28 20:30:42 addons-307023 crio[678]: time="2024-05-28 20:30:42.404243379Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ac5f9951-1c14-4183-a235-4ada84489043 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 20:30:42 addons-307023 crio[678]: time="2024-05-28 20:30:42.405512044Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716928242405486915,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584528,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ac5f9951-1c14-4183-a235-4ada84489043 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 20:30:42 addons-307023 crio[678]: time="2024-05-28 20:30:42.406274772Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7be5ac33-d4af-44ba-9e33-2da6fbd7bbf7 name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:30:42 addons-307023 crio[678]: time="2024-05-28 20:30:42.406329644Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7be5ac33-d4af-44ba-9e33-2da6fbd7bbf7 name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:30:42 addons-307023 crio[678]: time="2024-05-28 20:30:42.406595848Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:44e852d22ba1819ddf0ebf3d0807c81eb34686152ec8c0d2d28504332322f910,PodSandboxId:bf42055627c5f778e09c307e0d382b8cd97e98b2436484afb7ff68aaa6095122,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1716928061153443230,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-rrlcz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 64e12ad3-6c87-49f8-b726-fe0fbe89ae4a,},Annotations:map[string]string{io.kubernetes.container.hash: 6ff1a144,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b25b089b47c42d9c78f20b6b9c92fae06be840b62420d791663b1b24bc7309f5,PodSandboxId:0763ad6c0dc6d8501a1652e1f80c871817876837137898af4b5567b1887c73da,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:516475cc129da42866742567714ddc681e5eed7b9ee0b9e9c015e464b4221a00,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:501d84f5d06487ff81e506134dc922ed4fd2080d5521eb5b6ee4054fa17d15c4,State:CONTAINER_RUNNING,CreatedAt:1716927918822003784,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 827204ae-e6ad-4624-87ec-f215a8cd56dd,},Annotations:map[string]string{io.kubern
etes.container.hash: a5c3bd23,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b252fd37aa9f5d12c758c9f92d1d30b108d86442b0cc874ea70f6bbcb4652fd,PodSandboxId:cf610ed316048dec99893c84336ba42640572f6ea101da9f9c37e8a3027e281b,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bd42824d488ce58074f6d54cb051437d0dc2669f3f96a4d9b3b72a8d7ddda679,State:CONTAINER_RUNNING,CreatedAt:1716927889296922002,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-68456f997b-jtz8c,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: b1e41c1e-f373-4b51-9cf7-70350652cb99,},Annotations:map[string]string{io.kubernetes.container.hash: 737cf372,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea96afa17ea5229968ae5126d5b87ebacc473587a5b9b920907f3b088d2db71b,PodSandboxId:cbafe4ece952b44f2d401289fbd0398cb5d2750747349300574a5cf49a56c635,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1716927878035455173,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-9zg48,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 549f8b18-adb3-46d7-b9d6-66982b3a6ea6,},Annotations:map[string]string{io.kubernetes.container.hash: 8848d3f9,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f8b8e68eeb825dbfe1b66ff7ec29f0c1031fe1dcf332aa48ad22deb75ffb888,PodSandboxId:83d262bedf4d9e8087c85dcc670607f92c478fd2eaaf04bc77071634c2e71df1,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:17169
27850307483582,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-wpxcg,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: a9c0d228-38f8-4c7f-99d8-bd87c9f25ce2,},Annotations:map[string]string{io.kubernetes.container.hash: 55c66455,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:355a251e3cf31eb25dddce88b95ad2c82fec40efb40b26bf7d9a5ecbe490c54d,PodSandboxId:adc08090883c6d3e03290eecc1f5dc06e0ba00cc5efb1a80b0cc621418111219,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1716927841699867655,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-wjvkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9aa82de-329c-4c74-bdc0-f304386c8ede,},Annotations:map[string]string{io.kubernetes.container.hash: 36569b19,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c00a4fc421b3f7ba35f835a5ffb3586ed187a922a41a09e7d62310f40f28b4a,PodSandboxId:9c0d3246c219251ca87a4a7ec1763e4dd2e73259bee9e42c74b7c5be81800259,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f56173
42c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716927797215259911,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91636457-a3cb-48a7-bfd4-58907cb354d4,},Annotations:map[string]string{io.kubernetes.container.hash: cc7bb4e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5fc52623b4367389165518a9a3ad68d3a6dfd0bea62fee6e8dd4c68333d1d46,PodSandboxId:7c5c0b887193c90d66def7ae7eb242acb7366790eb1927bd6fa5dcbb3ed48e17,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00
797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716927793886249131,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hmjmn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 805eb200-abef-49e1-b441-570367fec5ad,},Annotations:map[string]string{io.kubernetes.container.hash: f422ecb9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee29a48aa62269f9e1e3af0e1a3bcef0cc8ba7014553ab82c93aaf128c2b3c2d,PodSan
dboxId:2495266d97bb1d5cb4f6cd6f29256ca074665a7c52909c4e509b0dfb148e7f85,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716927791278495326,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zm9r7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02de5251-8d15-4ee9-b99b-978c02f4f9c5,},Annotations:map[string]string{io.kubernetes.container.hash: c31dc7d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a3a1afe43af2cdcd45e85e5e47b2e24cc33c6f39bd25773f3d79aa3320f7d35,PodSandboxId:fc0fa40ac15ca4b2d833bff5a2
9b6698162423e2e5cf82e008671835e01bb941,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716927770428301949,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-307023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac4386d3c845bcc94595a3690ec06fc3,},Annotations:map[string]string{io.kubernetes.container.hash: 7722ad34,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d2d00755e2d2aa085fc127e00bff5cae198357367f1612c5a00654687283113,PodSandboxId:0881e09a01ed0effac1090fa172d7d93d343d9b37961684879dfd734a60797c7,Metadata
:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716927770398708161,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-307023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f11cf3c88298cfc595782890de812176,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56a2635bc2ea98051fa0060e6982cdbc878403ecd774b69a9700979890c020db,PodSandboxId:780d40c6d09cdb0796a0b522be0886c63e7d6f19d831fdbb49f94404e4680173,Metadata:&ContainerMetada
ta{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716927770369120842,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-307023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22ce70e57fa105e011ca6bdbe769de6c,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ea69bdbd09c4bf83134ec7a0266e2b89e83fba1e72332118c62196e4802c2fd,PodSandboxId:9c890007588d6da51a68d5745609454c364d5ae51919f8cc5cc222a0de66a20e,Metadata:&Conta
inerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716927770361963110,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-307023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9329d3d3dd989369304d748209ebae5,},Annotations:map[string]string{io.kubernetes.container.hash: 6f4bb928,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7be5ac33-d4af-44ba-9e33-2da6fbd7bbf7 name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:30:42 addons-307023 crio[678]: time="2024-05-28 20:30:42.451514252Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7c7c4e07-be4a-455a-b458-38ea3ce6b8cc name=/runtime.v1.RuntimeService/Version
	May 28 20:30:42 addons-307023 crio[678]: time="2024-05-28 20:30:42.451630037Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7c7c4e07-be4a-455a-b458-38ea3ce6b8cc name=/runtime.v1.RuntimeService/Version
	May 28 20:30:42 addons-307023 crio[678]: time="2024-05-28 20:30:42.452470737Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=368a24d0-2040-4d39-b0aa-0cc57642194b name=/runtime.v1.ImageService/ImageFsInfo
	May 28 20:30:42 addons-307023 crio[678]: time="2024-05-28 20:30:42.453618980Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716928242453594576,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584528,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=368a24d0-2040-4d39-b0aa-0cc57642194b name=/runtime.v1.ImageService/ImageFsInfo
	May 28 20:30:42 addons-307023 crio[678]: time="2024-05-28 20:30:42.454280478Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=346570e6-2d95-43cc-bae6-4488a19fe0b7 name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:30:42 addons-307023 crio[678]: time="2024-05-28 20:30:42.454336354Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=346570e6-2d95-43cc-bae6-4488a19fe0b7 name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:30:42 addons-307023 crio[678]: time="2024-05-28 20:30:42.454617900Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:44e852d22ba1819ddf0ebf3d0807c81eb34686152ec8c0d2d28504332322f910,PodSandboxId:bf42055627c5f778e09c307e0d382b8cd97e98b2436484afb7ff68aaa6095122,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1716928061153443230,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-rrlcz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 64e12ad3-6c87-49f8-b726-fe0fbe89ae4a,},Annotations:map[string]string{io.kubernetes.container.hash: 6ff1a144,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b25b089b47c42d9c78f20b6b9c92fae06be840b62420d791663b1b24bc7309f5,PodSandboxId:0763ad6c0dc6d8501a1652e1f80c871817876837137898af4b5567b1887c73da,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:516475cc129da42866742567714ddc681e5eed7b9ee0b9e9c015e464b4221a00,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:501d84f5d06487ff81e506134dc922ed4fd2080d5521eb5b6ee4054fa17d15c4,State:CONTAINER_RUNNING,CreatedAt:1716927918822003784,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 827204ae-e6ad-4624-87ec-f215a8cd56dd,},Annotations:map[string]string{io.kubern
etes.container.hash: a5c3bd23,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b252fd37aa9f5d12c758c9f92d1d30b108d86442b0cc874ea70f6bbcb4652fd,PodSandboxId:cf610ed316048dec99893c84336ba42640572f6ea101da9f9c37e8a3027e281b,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bd42824d488ce58074f6d54cb051437d0dc2669f3f96a4d9b3b72a8d7ddda679,State:CONTAINER_RUNNING,CreatedAt:1716927889296922002,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-68456f997b-jtz8c,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: b1e41c1e-f373-4b51-9cf7-70350652cb99,},Annotations:map[string]string{io.kubernetes.container.hash: 737cf372,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea96afa17ea5229968ae5126d5b87ebacc473587a5b9b920907f3b088d2db71b,PodSandboxId:cbafe4ece952b44f2d401289fbd0398cb5d2750747349300574a5cf49a56c635,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1716927878035455173,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-9zg48,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 549f8b18-adb3-46d7-b9d6-66982b3a6ea6,},Annotations:map[string]string{io.kubernetes.container.hash: 8848d3f9,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f8b8e68eeb825dbfe1b66ff7ec29f0c1031fe1dcf332aa48ad22deb75ffb888,PodSandboxId:83d262bedf4d9e8087c85dcc670607f92c478fd2eaaf04bc77071634c2e71df1,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:17169
27850307483582,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-wpxcg,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: a9c0d228-38f8-4c7f-99d8-bd87c9f25ce2,},Annotations:map[string]string{io.kubernetes.container.hash: 55c66455,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:355a251e3cf31eb25dddce88b95ad2c82fec40efb40b26bf7d9a5ecbe490c54d,PodSandboxId:adc08090883c6d3e03290eecc1f5dc06e0ba00cc5efb1a80b0cc621418111219,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1716927841699867655,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-wjvkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9aa82de-329c-4c74-bdc0-f304386c8ede,},Annotations:map[string]string{io.kubernetes.container.hash: 36569b19,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c00a4fc421b3f7ba35f835a5ffb3586ed187a922a41a09e7d62310f40f28b4a,PodSandboxId:9c0d3246c219251ca87a4a7ec1763e4dd2e73259bee9e42c74b7c5be81800259,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f56173
42c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716927797215259911,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91636457-a3cb-48a7-bfd4-58907cb354d4,},Annotations:map[string]string{io.kubernetes.container.hash: cc7bb4e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5fc52623b4367389165518a9a3ad68d3a6dfd0bea62fee6e8dd4c68333d1d46,PodSandboxId:7c5c0b887193c90d66def7ae7eb242acb7366790eb1927bd6fa5dcbb3ed48e17,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00
797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716927793886249131,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hmjmn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 805eb200-abef-49e1-b441-570367fec5ad,},Annotations:map[string]string{io.kubernetes.container.hash: f422ecb9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee29a48aa62269f9e1e3af0e1a3bcef0cc8ba7014553ab82c93aaf128c2b3c2d,PodSan
dboxId:2495266d97bb1d5cb4f6cd6f29256ca074665a7c52909c4e509b0dfb148e7f85,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716927791278495326,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zm9r7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02de5251-8d15-4ee9-b99b-978c02f4f9c5,},Annotations:map[string]string{io.kubernetes.container.hash: c31dc7d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a3a1afe43af2cdcd45e85e5e47b2e24cc33c6f39bd25773f3d79aa3320f7d35,PodSandboxId:fc0fa40ac15ca4b2d833bff5a2
9b6698162423e2e5cf82e008671835e01bb941,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716927770428301949,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-307023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac4386d3c845bcc94595a3690ec06fc3,},Annotations:map[string]string{io.kubernetes.container.hash: 7722ad34,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d2d00755e2d2aa085fc127e00bff5cae198357367f1612c5a00654687283113,PodSandboxId:0881e09a01ed0effac1090fa172d7d93d343d9b37961684879dfd734a60797c7,Metadata
:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716927770398708161,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-307023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f11cf3c88298cfc595782890de812176,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56a2635bc2ea98051fa0060e6982cdbc878403ecd774b69a9700979890c020db,PodSandboxId:780d40c6d09cdb0796a0b522be0886c63e7d6f19d831fdbb49f94404e4680173,Metadata:&ContainerMetada
ta{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716927770369120842,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-307023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22ce70e57fa105e011ca6bdbe769de6c,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ea69bdbd09c4bf83134ec7a0266e2b89e83fba1e72332118c62196e4802c2fd,PodSandboxId:9c890007588d6da51a68d5745609454c364d5ae51919f8cc5cc222a0de66a20e,Metadata:&Conta
inerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716927770361963110,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-307023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9329d3d3dd989369304d748209ebae5,},Annotations:map[string]string{io.kubernetes.container.hash: 6f4bb928,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=346570e6-2d95-43cc-bae6-4488a19fe0b7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	44e852d22ba18       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                 3 minutes ago       Running             hello-world-app           0                   bf42055627c5f       hello-world-app-86c47465fc-rrlcz
	b25b089b47c42       docker.io/library/nginx@sha256:516475cc129da42866742567714ddc681e5eed7b9ee0b9e9c015e464b4221a00                         5 minutes ago       Running             nginx                     0                   0763ad6c0dc6d       nginx
	4b252fd37aa9f       ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474                   5 minutes ago       Running             headlamp                  0                   cf610ed316048       headlamp-68456f997b-jtz8c
	ea96afa17ea52       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            6 minutes ago       Running             gcp-auth                  0                   cbafe4ece952b       gcp-auth-5db96cd9b4-9zg48
	5f8b8e68eeb82       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                         6 minutes ago       Running             yakd                      0                   83d262bedf4d9       yakd-dashboard-5ddbf7d777-wpxcg
	355a251e3cf31       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   6 minutes ago       Running             metrics-server            0                   adc08090883c6       metrics-server-c59844bb4-wjvkg
	5c00a4fc421b3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       0                   9c0d3246c2192       storage-provisioner
	b5fc52623b436       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        7 minutes ago       Running             coredns                   0                   7c5c0b887193c       coredns-7db6d8ff4d-hmjmn
	ee29a48aa6226       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                                        7 minutes ago       Running             kube-proxy                0                   2495266d97bb1       kube-proxy-zm9r7
	1a3a1afe43af2       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        7 minutes ago       Running             etcd                      0                   fc0fa40ac15ca       etcd-addons-307023
	4d2d00755e2d2       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                                        7 minutes ago       Running             kube-scheduler            0                   0881e09a01ed0       kube-scheduler-addons-307023
	56a2635bc2ea9       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                                        7 minutes ago       Running             kube-controller-manager   0                   780d40c6d09cd       kube-controller-manager-addons-307023
	8ea69bdbd09c4       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                                        7 minutes ago       Running             kube-apiserver            0                   9c890007588d6       kube-apiserver-addons-307023
	
	
	==> coredns [b5fc52623b4367389165518a9a3ad68d3a6dfd0bea62fee6e8dd4c68333d1d46] <==
	[INFO] 10.244.0.7:40244 - 576 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000146022s
	[INFO] 10.244.0.7:50777 - 57115 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000134875s
	[INFO] 10.244.0.7:50777 - 10521 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000124428s
	[INFO] 10.244.0.7:40623 - 23829 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000097082s
	[INFO] 10.244.0.7:40623 - 29719 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000099101s
	[INFO] 10.244.0.7:57130 - 44038 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000138694s
	[INFO] 10.244.0.7:57130 - 62980 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000124673s
	[INFO] 10.244.0.7:44285 - 57575 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000071153s
	[INFO] 10.244.0.7:44285 - 46817 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000073486s
	[INFO] 10.244.0.7:49042 - 12883 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000061762s
	[INFO] 10.244.0.7:49042 - 13133 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000067265s
	[INFO] 10.244.0.7:48451 - 44549 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000046643s
	[INFO] 10.244.0.7:48451 - 32519 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000049279s
	[INFO] 10.244.0.7:36430 - 27828 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000038772s
	[INFO] 10.244.0.7:36430 - 2230 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000068683s
	[INFO] 10.244.0.22:54672 - 6661 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000678621s
	[INFO] 10.244.0.22:44241 - 10797 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000466562s
	[INFO] 10.244.0.22:39358 - 29302 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000107879s
	[INFO] 10.244.0.22:57973 - 60005 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000106986s
	[INFO] 10.244.0.22:58356 - 10500 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000107302s
	[INFO] 10.244.0.22:49176 - 36376 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000102899s
	[INFO] 10.244.0.22:54393 - 32442 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000602725s
	[INFO] 10.244.0.22:46509 - 35204 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000434545s
	[INFO] 10.244.0.25:59008 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000157994s
	[INFO] 10.244.0.25:52385 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000085538s
	
	
	==> describe nodes <==
	Name:               addons-307023
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-307023
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82
	                    minikube.k8s.io/name=addons-307023
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_28T20_22_56_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-307023
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 20:22:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-307023
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 20:30:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 May 2024 20:28:01 +0000   Tue, 28 May 2024 20:22:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 May 2024 20:28:01 +0000   Tue, 28 May 2024 20:22:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 May 2024 20:28:01 +0000   Tue, 28 May 2024 20:22:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 May 2024 20:28:01 +0000   Tue, 28 May 2024 20:22:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.230
	  Hostname:    addons-307023
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 e6c6949e5e2a4c13b4bc7ebf3ad315cb
	  System UUID:                e6c6949e-5e2a-4c13-b4bc-7ebf3ad315cb
	  Boot ID:                    166e0ee6-5851-451e-b967-057317e752a3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86c47465fc-rrlcz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m5s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m28s
	  gcp-auth                    gcp-auth-5db96cd9b4-9zg48                0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m20s
	  headlamp                    headlamp-68456f997b-jtz8c                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 coredns-7db6d8ff4d-hmjmn                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m32s
	  kube-system                 etcd-addons-307023                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m47s
	  kube-system                 kube-apiserver-addons-307023             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m47s
	  kube-system                 kube-controller-manager-addons-307023    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m47s
	  kube-system                 kube-proxy-zm9r7                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m32s
	  kube-system                 kube-scheduler-addons-307023             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m47s
	  kube-system                 metrics-server-c59844bb4-wjvkg           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         7m26s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m27s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-wpxcg          0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     7m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             498Mi (13%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m30s  kube-proxy       
	  Normal  Starting                 7m47s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m47s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m47s  kubelet          Node addons-307023 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m47s  kubelet          Node addons-307023 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m47s  kubelet          Node addons-307023 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m46s  kubelet          Node addons-307023 status is now: NodeReady
	  Normal  RegisteredNode           7m33s  node-controller  Node addons-307023 event: Registered Node addons-307023 in Controller
	
	
	==> dmesg <==
	[  +0.076472] kauditd_printk_skb: 69 callbacks suppressed
	[May28 20:23] kauditd_printk_skb: 18 callbacks suppressed
	[  +9.244378] systemd-fstab-generator[1506]: Ignoring "noauto" option for root device
	[  +5.176616] kauditd_printk_skb: 117 callbacks suppressed
	[  +5.167531] kauditd_printk_skb: 109 callbacks suppressed
	[  +6.683851] kauditd_printk_skb: 98 callbacks suppressed
	[ +20.471985] kauditd_printk_skb: 2 callbacks suppressed
	[May28 20:24] kauditd_printk_skb: 25 callbacks suppressed
	[ +11.608759] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.399711] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.015499] kauditd_printk_skb: 109 callbacks suppressed
	[  +6.479386] kauditd_printk_skb: 8 callbacks suppressed
	[  +6.691471] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.222895] kauditd_printk_skb: 51 callbacks suppressed
	[  +8.725465] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.465413] kauditd_printk_skb: 25 callbacks suppressed
	[May28 20:25] kauditd_printk_skb: 65 callbacks suppressed
	[  +6.060728] kauditd_printk_skb: 19 callbacks suppressed
	[  +7.536821] kauditd_printk_skb: 39 callbacks suppressed
	[ +24.679775] kauditd_printk_skb: 2 callbacks suppressed
	[ +11.319226] kauditd_printk_skb: 3 callbacks suppressed
	[May28 20:26] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.378262] kauditd_printk_skb: 33 callbacks suppressed
	[May28 20:27] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.906639] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [1a3a1afe43af2cdcd45e85e5e47b2e24cc33c6f39bd25773f3d79aa3320f7d35] <==
	{"level":"info","ts":"2024-05-28T20:24:14.248901Z","caller":"traceutil/trace.go:171","msg":"trace[151110983] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:972; }","duration":"230.781876ms","start":"2024-05-28T20:24:14.018111Z","end":"2024-05-28T20:24:14.248893Z","steps":["trace[151110983] 'agreement among raft nodes before linearized reading'  (duration: 230.515632ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T20:24:14.249066Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.483181ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-28T20:24:14.249103Z","caller":"traceutil/trace.go:171","msg":"trace[461391185] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:972; }","duration":"101.520706ms","start":"2024-05-28T20:24:14.147577Z","end":"2024-05-28T20:24:14.249097Z","steps":["trace[461391185] 'agreement among raft nodes before linearized reading'  (duration: 101.470851ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T20:24:30.581274Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"300.545956ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11453"}
	{"level":"info","ts":"2024-05-28T20:24:30.581319Z","caller":"traceutil/trace.go:171","msg":"trace[1614791197] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1113; }","duration":"300.618747ms","start":"2024-05-28T20:24:30.28069Z","end":"2024-05-28T20:24:30.581308Z","steps":["trace[1614791197] 'range keys from in-memory index tree'  (duration: 300.301105ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T20:24:30.581345Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-28T20:24:30.280675Z","time spent":"300.665386ms","remote":"127.0.0.1:40700","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":3,"response size":11476,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"warn","ts":"2024-05-28T20:24:30.581496Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"294.667057ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14363"}
	{"level":"info","ts":"2024-05-28T20:24:30.581513Z","caller":"traceutil/trace.go:171","msg":"trace[1350744861] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1113; }","duration":"294.762269ms","start":"2024-05-28T20:24:30.286746Z","end":"2024-05-28T20:24:30.581508Z","steps":["trace[1350744861] 'range keys from in-memory index tree'  (duration: 294.588874ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-28T20:24:44.880672Z","caller":"traceutil/trace.go:171","msg":"trace[453788130] linearizableReadLoop","detail":"{readStateIndex:1254; appliedIndex:1253; }","duration":"193.155784ms","start":"2024-05-28T20:24:44.687503Z","end":"2024-05-28T20:24:44.880659Z","steps":["trace[453788130] 'read index received'  (duration: 191.852017ms)","trace[453788130] 'applied index is now lower than readState.Index'  (duration: 1.303212ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-28T20:24:44.881009Z","caller":"traceutil/trace.go:171","msg":"trace[1046907904] transaction","detail":"{read_only:false; response_revision:1215; number_of_response:1; }","duration":"217.485065ms","start":"2024-05-28T20:24:44.663513Z","end":"2024-05-28T20:24:44.880998Z","steps":["trace[1046907904] 'process raft request'  (duration: 217.057326ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T20:24:44.881212Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.690887ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-307023\" ","response":"range_response_count:1 size:9252"}
	{"level":"info","ts":"2024-05-28T20:24:44.881261Z","caller":"traceutil/trace.go:171","msg":"trace[921117727] range","detail":"{range_begin:/registry/minions/addons-307023; range_end:; response_count:1; response_revision:1215; }","duration":"193.773621ms","start":"2024-05-28T20:24:44.68748Z","end":"2024-05-28T20:24:44.881253Z","steps":["trace[921117727] 'agreement among raft nodes before linearized reading'  (duration: 193.658039ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-28T20:24:46.943387Z","caller":"traceutil/trace.go:171","msg":"trace[92720894] transaction","detail":"{read_only:false; response_revision:1223; number_of_response:1; }","duration":"103.478548ms","start":"2024-05-28T20:24:46.839891Z","end":"2024-05-28T20:24:46.943369Z","steps":["trace[92720894] 'process raft request'  (duration: 102.797236ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-28T20:24:49.164973Z","caller":"traceutil/trace.go:171","msg":"trace[2142571499] linearizableReadLoop","detail":"{readStateIndex:1267; appliedIndex:1266; }","duration":"193.958504ms","start":"2024-05-28T20:24:48.971Z","end":"2024-05-28T20:24:49.164959Z","steps":["trace[2142571499] 'read index received'  (duration: 193.748139ms)","trace[2142571499] 'applied index is now lower than readState.Index'  (duration: 209.933µs)"],"step_count":2}
	{"level":"info","ts":"2024-05-28T20:24:49.165227Z","caller":"traceutil/trace.go:171","msg":"trace[1358655908] transaction","detail":"{read_only:false; response_revision:1228; number_of_response:1; }","duration":"338.135244ms","start":"2024-05-28T20:24:48.827081Z","end":"2024-05-28T20:24:49.165216Z","steps":["trace[1358655908] 'process raft request'  (duration: 337.789491ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T20:24:49.165335Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-28T20:24:48.827065Z","time spent":"338.210444ms","remote":"127.0.0.1:40784","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":541,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-307023\" mod_revision:1159 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-307023\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-307023\" > >"}
	{"level":"warn","ts":"2024-05-28T20:24:49.165615Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"194.633793ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-05-28T20:24:49.165638Z","caller":"traceutil/trace.go:171","msg":"trace[474094034] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1228; }","duration":"194.677236ms","start":"2024-05-28T20:24:48.970954Z","end":"2024-05-28T20:24:49.165631Z","steps":["trace[474094034] 'agreement among raft nodes before linearized reading'  (duration: 194.611001ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T20:25:13.268879Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"169.925572ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7210984011906190305 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/storageclasses/local-path\" mod_revision:1432 > success:<request_delete_range:<key:\"/registry/storageclasses/local-path\" > > failure:<request_range:<key:\"/registry/storageclasses/local-path\" > >>","response":"size:18"}
	{"level":"info","ts":"2024-05-28T20:25:13.269033Z","caller":"traceutil/trace.go:171","msg":"trace[2044234968] linearizableReadLoop","detail":"{readStateIndex:1489; appliedIndex:1488; }","duration":"189.096349ms","start":"2024-05-28T20:25:13.079927Z","end":"2024-05-28T20:25:13.269023Z","steps":["trace[2044234968] 'read index received'  (duration: 18.578425ms)","trace[2044234968] 'applied index is now lower than readState.Index'  (duration: 170.516707ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-28T20:25:13.269241Z","caller":"traceutil/trace.go:171","msg":"trace[2040328284] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1439; }","duration":"203.206219ms","start":"2024-05-28T20:25:13.066024Z","end":"2024-05-28T20:25:13.269231Z","steps":["trace[2040328284] 'process raft request'  (duration: 32.48907ms)","trace[2040328284] 'compare'  (duration: 169.547651ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-28T20:25:13.269481Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"189.550248ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/local-path-storage/local-path-provisioner-8d985888d\" ","response":"range_response_count:1 size:2711"}
	{"level":"info","ts":"2024-05-28T20:25:13.269527Z","caller":"traceutil/trace.go:171","msg":"trace[781804698] range","detail":"{range_begin:/registry/replicasets/local-path-storage/local-path-provisioner-8d985888d; range_end:; response_count:1; response_revision:1439; }","duration":"189.616201ms","start":"2024-05-28T20:25:13.079904Z","end":"2024-05-28T20:25:13.26952Z","steps":["trace[781804698] 'agreement among raft nodes before linearized reading'  (duration: 189.515402ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T20:25:13.269608Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.691594ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-28T20:25:13.269639Z","caller":"traceutil/trace.go:171","msg":"trace[223807614] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1439; }","duration":"121.744922ms","start":"2024-05-28T20:25:13.147889Z","end":"2024-05-28T20:25:13.269634Z","steps":["trace[223807614] 'agreement among raft nodes before linearized reading'  (duration: 121.701636ms)"],"step_count":1}
	
	
	==> gcp-auth [ea96afa17ea5229968ae5126d5b87ebacc473587a5b9b920907f3b088d2db71b] <==
	2024/05/28 20:24:38 GCP Auth Webhook started!
	2024/05/28 20:24:43 Ready to marshal response ...
	2024/05/28 20:24:43 Ready to write response ...
	2024/05/28 20:24:43 Ready to marshal response ...
	2024/05/28 20:24:43 Ready to write response ...
	2024/05/28 20:24:43 Ready to marshal response ...
	2024/05/28 20:24:43 Ready to write response ...
	2024/05/28 20:24:53 Ready to marshal response ...
	2024/05/28 20:24:53 Ready to write response ...
	2024/05/28 20:24:53 Ready to marshal response ...
	2024/05/28 20:24:53 Ready to write response ...
	2024/05/28 20:24:59 Ready to marshal response ...
	2024/05/28 20:24:59 Ready to write response ...
	2024/05/28 20:25:00 Ready to marshal response ...
	2024/05/28 20:25:00 Ready to write response ...
	2024/05/28 20:25:12 Ready to marshal response ...
	2024/05/28 20:25:12 Ready to write response ...
	2024/05/28 20:25:14 Ready to marshal response ...
	2024/05/28 20:25:14 Ready to write response ...
	2024/05/28 20:25:37 Ready to marshal response ...
	2024/05/28 20:25:37 Ready to write response ...
	2024/05/28 20:26:05 Ready to marshal response ...
	2024/05/28 20:26:05 Ready to write response ...
	2024/05/28 20:27:37 Ready to marshal response ...
	2024/05/28 20:27:37 Ready to write response ...
	
	
	==> kernel <==
	 20:30:42 up 8 min,  0 users,  load average: 0.21, 0.66, 0.48
	Linux addons-307023 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8ea69bdbd09c4bf83134ec7a0266e2b89e83fba1e72332118c62196e4802c2fd] <==
	E0528 20:25:11.818054       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0528 20:25:11.818285       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.58.136:443/apis/metrics.k8s.io/v1beta1: Get "https://10.97.58.136:443/apis/metrics.k8s.io/v1beta1": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0528 20:25:11.837591       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0528 20:25:11.847285       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	I0528 20:25:14.115699       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0528 20:25:14.304175       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.143.122"}
	E0528 20:25:28.437170       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0528 20:25:53.605341       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0528 20:26:22.529582       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0528 20:26:22.530068       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0528 20:26:22.588591       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0528 20:26:22.588648       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0528 20:26:22.598301       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0528 20:26:22.598383       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0528 20:26:22.603143       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0528 20:26:22.603197       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0528 20:26:22.639133       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0528 20:26:22.639189       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0528 20:26:23.599019       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0528 20:26:23.639427       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0528 20:26:23.648196       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0528 20:27:37.509945       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.139.198"}
	E0528 20:27:40.523917       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E0528 20:27:42.967111       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [56a2635bc2ea98051fa0060e6982cdbc878403ecd774b69a9700979890c020db] <==
	W0528 20:28:37.697248       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0528 20:28:37.697361       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0528 20:29:07.705206       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0528 20:29:07.705310       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0528 20:29:12.830919       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0528 20:29:12.830968       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0528 20:29:17.962163       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0528 20:29:17.962258       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0528 20:29:24.762598       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0528 20:29:24.762694       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0528 20:29:56.044145       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0528 20:29:56.044404       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0528 20:30:00.970255       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0528 20:30:00.970332       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0528 20:30:03.136180       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0528 20:30:03.136419       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0528 20:30:10.940678       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0528 20:30:10.940740       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0528 20:30:31.860947       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0528 20:30:31.861057       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0528 20:30:35.751343       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0528 20:30:35.751446       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0528 20:30:41.419296       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="11.754µs"
	W0528 20:30:42.308935       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0528 20:30:42.308982       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [ee29a48aa62269f9e1e3af0e1a3bcef0cc8ba7014553ab82c93aaf128c2b3c2d] <==
	I0528 20:23:12.124248       1 server_linux.go:69] "Using iptables proxy"
	I0528 20:23:12.139669       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.230"]
	I0528 20:23:12.237210       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0528 20:23:12.237256       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0528 20:23:12.237271       1 server_linux.go:165] "Using iptables Proxier"
	I0528 20:23:12.244041       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0528 20:23:12.244227       1 server.go:872] "Version info" version="v1.30.1"
	I0528 20:23:12.244250       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 20:23:12.252316       1 config.go:192] "Starting service config controller"
	I0528 20:23:12.252351       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0528 20:23:12.252376       1 config.go:101] "Starting endpoint slice config controller"
	I0528 20:23:12.252380       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0528 20:23:12.252699       1 config.go:319] "Starting node config controller"
	I0528 20:23:12.256929       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0528 20:23:12.353183       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0528 20:23:12.353230       1 shared_informer.go:320] Caches are synced for service config
	I0528 20:23:12.357178       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4d2d00755e2d2aa085fc127e00bff5cae198357367f1612c5a00654687283113] <==
	W0528 20:22:53.718607       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0528 20:22:53.718659       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0528 20:22:53.747926       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0528 20:22:53.747975       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0528 20:22:53.807729       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0528 20:22:53.807888       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0528 20:22:53.872691       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0528 20:22:53.872740       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0528 20:22:53.914964       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0528 20:22:53.915129       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0528 20:22:53.928671       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0528 20:22:53.929117       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0528 20:22:53.930030       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0528 20:22:53.930305       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0528 20:22:53.937867       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0528 20:22:53.938019       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0528 20:22:53.956150       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0528 20:22:53.956267       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0528 20:22:53.968968       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0528 20:22:53.969011       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0528 20:22:54.221073       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0528 20:22:54.221123       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0528 20:22:54.240740       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0528 20:22:54.240883       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0528 20:22:56.784464       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 28 20:27:45 addons-307023 kubelet[1273]: I0528 20:27:45.449614    1273 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="167a9905-80e8-4cc4-810b-4126763d9076" path="/var/lib/kubelet/pods/167a9905-80e8-4cc4-810b-4126763d9076/volumes"
	May 28 20:27:55 addons-307023 kubelet[1273]: E0528 20:27:55.475714    1273 iptables.go:577] "Could not set up iptables canary" err=<
	May 28 20:27:55 addons-307023 kubelet[1273]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 20:27:55 addons-307023 kubelet[1273]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 20:27:55 addons-307023 kubelet[1273]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 20:27:55 addons-307023 kubelet[1273]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 28 20:27:56 addons-307023 kubelet[1273]: I0528 20:27:56.084257    1273 scope.go:117] "RemoveContainer" containerID="4befefa5042be2982516cb700b3a4031a58d2b758c6a0c52516c31968ae3c1dc"
	May 28 20:27:56 addons-307023 kubelet[1273]: I0528 20:27:56.113654    1273 scope.go:117] "RemoveContainer" containerID="f3ed0d6afef331c5b29ff51d1aee27830111aee694135f0db39ba20ffa10bac6"
	May 28 20:28:55 addons-307023 kubelet[1273]: E0528 20:28:55.473243    1273 iptables.go:577] "Could not set up iptables canary" err=<
	May 28 20:28:55 addons-307023 kubelet[1273]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 20:28:55 addons-307023 kubelet[1273]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 20:28:55 addons-307023 kubelet[1273]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 20:28:55 addons-307023 kubelet[1273]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 28 20:29:55 addons-307023 kubelet[1273]: E0528 20:29:55.465206    1273 iptables.go:577] "Could not set up iptables canary" err=<
	May 28 20:29:55 addons-307023 kubelet[1273]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 20:29:55 addons-307023 kubelet[1273]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 20:29:55 addons-307023 kubelet[1273]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 20:29:55 addons-307023 kubelet[1273]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 28 20:30:42 addons-307023 kubelet[1273]: I0528 20:30:42.760737    1273 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a9aa82de-329c-4c74-bdc0-f304386c8ede-tmp-dir\") pod \"a9aa82de-329c-4c74-bdc0-f304386c8ede\" (UID: \"a9aa82de-329c-4c74-bdc0-f304386c8ede\") "
	May 28 20:30:42 addons-307023 kubelet[1273]: I0528 20:30:42.760939    1273 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmjdm\" (UniqueName: \"kubernetes.io/projected/a9aa82de-329c-4c74-bdc0-f304386c8ede-kube-api-access-gmjdm\") pod \"a9aa82de-329c-4c74-bdc0-f304386c8ede\" (UID: \"a9aa82de-329c-4c74-bdc0-f304386c8ede\") "
	May 28 20:30:42 addons-307023 kubelet[1273]: I0528 20:30:42.763208    1273 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9aa82de-329c-4c74-bdc0-f304386c8ede-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "a9aa82de-329c-4c74-bdc0-f304386c8ede" (UID: "a9aa82de-329c-4c74-bdc0-f304386c8ede"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	May 28 20:30:42 addons-307023 kubelet[1273]: I0528 20:30:42.779150    1273 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9aa82de-329c-4c74-bdc0-f304386c8ede-kube-api-access-gmjdm" (OuterVolumeSpecName: "kube-api-access-gmjdm") pod "a9aa82de-329c-4c74-bdc0-f304386c8ede" (UID: "a9aa82de-329c-4c74-bdc0-f304386c8ede"). InnerVolumeSpecName "kube-api-access-gmjdm". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 28 20:30:42 addons-307023 kubelet[1273]: I0528 20:30:42.861477    1273 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a9aa82de-329c-4c74-bdc0-f304386c8ede-tmp-dir\") on node \"addons-307023\" DevicePath \"\""
	May 28 20:30:42 addons-307023 kubelet[1273]: I0528 20:30:42.861522    1273 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-gmjdm\" (UniqueName: \"kubernetes.io/projected/a9aa82de-329c-4c74-bdc0-f304386c8ede-kube-api-access-gmjdm\") on node \"addons-307023\" DevicePath \"\""
	May 28 20:30:43 addons-307023 kubelet[1273]: I0528 20:30:43.110310    1273 scope.go:117] "RemoveContainer" containerID="355a251e3cf31eb25dddce88b95ad2c82fec40efb40b26bf7d9a5ecbe490c54d"
	
	
	==> storage-provisioner [5c00a4fc421b3f7ba35f835a5ffb3586ed187a922a41a09e7d62310f40f28b4a] <==
	I0528 20:23:17.851728       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0528 20:23:17.865856       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0528 20:23:17.865943       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0528 20:23:17.886930       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0528 20:23:17.887155       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-307023_0ffd0080-ee1e-4fd4-b08c-662145dfa312!
	I0528 20:23:17.887520       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"27e55524-0954-481a-a161-595387a48ad7", APIVersion:"v1", ResourceVersion:"620", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-307023_0ffd0080-ee1e-4fd4-b08c-662145dfa312 became leader
	I0528 20:23:17.987959       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-307023_0ffd0080-ee1e-4fd4-b08c-662145dfa312!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-307023 -n addons-307023
helpers_test.go:261: (dbg) Run:  kubectl --context addons-307023 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (340.58s)

TestAddons/StoppedEnableDisable (154.36s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-307023
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-307023: exit status 82 (2m0.457289273s)

-- stdout --
	* Stopping node "addons-307023"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-307023" : exit status 82
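The stop failed because, after two minutes of retries, the libvirt domain still reported "Running". A quick way to watch the same condition from outside minikube is to poll `virsh domstate` directly. The sketch below is plain Go against the virsh CLI, not the kvm2 driver's own libvirt code; it assumes the libvirt domain carries the profile name (addons-307023 here).

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForShutOff polls `virsh domstate` until the domain reports "shut off"
// or the deadline passes -- the condition the stop above never reached, since
// the state was still "Running" when minikube gave up.
func waitForShutOff(domain string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("virsh", "--connect", "qemu:///system", "domstate", domain).CombinedOutput()
		if err != nil {
			return fmt.Errorf("virsh domstate %s: %v: %s", domain, err, out)
		}
		if strings.TrimSpace(string(out)) == "shut off" {
			return nil
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("domain %q still not shut off after %s", domain, timeout)
}

func main() {
	// Domain name is an assumption; adjust it to the profile under test.
	if err := waitForShutOff("addons-307023", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}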
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-307023
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-307023: exit status 11 (21.616210347s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.230:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-307023" : exit status 11
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-307023
addons_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-307023: exit status 11 (6.143763895s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.230:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:184: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-307023" : exit status 11
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-307023
addons_test.go:187: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-307023: exit status 11 (6.143471721s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.230:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:189: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-307023" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.36s)
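All three addon commands above fail on the same underlying step: crictl is invoked over SSH, and the dial to 192.168.39.230:22 returns "no route to host" after the earlier stop left the VM in a bad state. A minimal probe like the following (plain Go standard library, IP taken from the failure output; not part of minikube) confirms the SSH port is unreachable before suspecting the addon machinery itself.

package main

import (
	"fmt"
	"net"
	"time"
)

// sshReachable reports whether a TCP connection to host:22 can be opened
// within the timeout. The MK_ADDON_*_PAUSED failures above all bottom out
// in exactly this dial failing with "no route to host".
func sshReachable(host string, timeout time.Duration) bool {
	conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, "22"), timeout)
	if err != nil {
		fmt.Printf("dial %s:22: %v\n", host, err)
		return false
	}
	conn.Close()
	return true
}

func main() {
	if sshReachable("192.168.39.230", 5*time.Second) {
		fmt.Println("guest SSH port reachable; addon enable/disable should be able to list containers")
	} else {
		fmt.Println("guest SSH port unreachable; expect the crictl-over-SSH checks to fail")
	}
}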

                                                
                                    
x
+
TestCertExpiration (1145.6s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-257793 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-257793 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m40.801700529s)
E0528 21:29:42.598104   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-257793 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
E0528 21:32:37.451431   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/functional-193928/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p cert-expiration-257793 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: exit status 109 (14m22.572539921s)

                                                
                                                
-- stdout --
	* [cert-expiration-257793] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18966-3963/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-3963/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "cert-expiration-257793" primary control-plane node in "cert-expiration-257793" cluster
	* Updating the running kvm2 "cert-expiration-257793" VM ...
	* Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Certificate client.crt has expired. Generating a new one...
	! Certificate apiserver.crt.fd482c32 has expired. Generating a new one...
	! Certificate proxy-client.crt has expired. Generating a new one...
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.30.1
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001185465s
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000926641s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.30.1
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.668534ms
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000155192s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.30.1
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.668534ms
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000155192s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

                                                
                                                
** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-linux-amd64 start -p cert-expiration-257793 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio" : exit status 109
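The test deliberately starts the profile with --cert-expiration=3m, waits out the three-minute certificates, then restarts with --cert-expiration=8760h. The restart does detect and regenerate the expired client, apiserver and proxy-client certificates (the "has expired. Generating a new one" warnings), but the subsequent kubeadm init never sees a healthy API server. The expiry condition itself is just a NotAfter comparison; a minimal sketch of that check, standard library only and with an assumed file path (not minikube's own implementation):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// certExpired reports whether the first certificate in a PEM file is past its
// NotAfter date -- the condition behind the "Certificate client.crt has
// expired" warnings above.
func certExpired(path string, now time.Time) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return now.After(cert.NotAfter), nil
}

func main() {
	// Path is illustrative; point it at the profile's client.crt under the
	// minikube profiles directory.
	expired, err := certExpired("client.crt", time.Now())
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expired:", expired)
}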
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-05-28 21:46:56.765500842 +0000 UTC m=+5147.521579463
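The immediate failure is the "[api-check] The API server is not healthy after 4m0.000155192s" wait timing out on both kubeadm attempts. That phase is, in essence, a poll of the API server's health endpoint with a deadline; the sketch below is a rough stand-in for it (not kubeadm's exact probe), with the control-plane address left as an assumed placeholder.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForAPIServer polls the given health endpoint until it returns 200 or
// the deadline passes, mirroring the kind of wait the "[api-check]" phase
// performs (which gave up here after four minutes).
func waitForAPIServer(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The probe does not load the cluster CA, so skip verification here.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("API server at %s not healthy within %s", url, timeout)
}

func main() {
	// Placeholder address: substitute the profile's control-plane IP and API server port.
	if err := waitForAPIServer("https://127.0.0.1:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}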
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p cert-expiration-257793 -n cert-expiration-257793
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p cert-expiration-257793 -n cert-expiration-257793: exit status 2 (216.981977ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestCertExpiration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestCertExpiration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p cert-expiration-257793 logs -n 25
helpers_test.go:252: TestCertExpiration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-110727 sudo cat                  | enable-default-cni-110727 | jenkins | v1.33.1 | 28 May 24 21:39 UTC | 28 May 24 21:39 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service             |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-110727                           | enable-default-cni-110727 | jenkins | v1.33.1 | 28 May 24 21:39 UTC | 28 May 24 21:39 UTC |
	|         | sudo cri-dockerd --version                             |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-110727                           | enable-default-cni-110727 | jenkins | v1.33.1 | 28 May 24 21:39 UTC |                     |
	|         | sudo systemctl status                                  |                           |         |         |                     |                     |
	|         | containerd --all --full                                |                           |         |         |                     |                     |
	|         | --no-pager                                             |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-110727                           | enable-default-cni-110727 | jenkins | v1.33.1 | 28 May 24 21:39 UTC | 28 May 24 21:39 UTC |
	|         | sudo systemctl cat containerd                          |                           |         |         |                     |                     |
	|         | --no-pager                                             |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-110727 sudo cat                  | enable-default-cni-110727 | jenkins | v1.33.1 | 28 May 24 21:39 UTC | 28 May 24 21:39 UTC |
	|         | /lib/systemd/system/containerd.service                 |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-110727                           | enable-default-cni-110727 | jenkins | v1.33.1 | 28 May 24 21:39 UTC | 28 May 24 21:39 UTC |
	|         | sudo cat                                               |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-110727                           | enable-default-cni-110727 | jenkins | v1.33.1 | 28 May 24 21:39 UTC | 28 May 24 21:39 UTC |
	|         | sudo containerd config dump                            |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-110727                           | enable-default-cni-110727 | jenkins | v1.33.1 | 28 May 24 21:39 UTC | 28 May 24 21:39 UTC |
	|         | sudo systemctl status crio                             |                           |         |         |                     |                     |
	|         | --all --full --no-pager                                |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-110727                           | enable-default-cni-110727 | jenkins | v1.33.1 | 28 May 24 21:39 UTC | 28 May 24 21:39 UTC |
	|         | sudo systemctl cat crio                                |                           |         |         |                     |                     |
	|         | --no-pager                                             |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-110727                           | enable-default-cni-110727 | jenkins | v1.33.1 | 28 May 24 21:39 UTC | 28 May 24 21:39 UTC |
	|         | sudo find /etc/crio -type f                            |                           |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                           |         |         |                     |                     |
	|         | \;                                                     |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-110727                           | enable-default-cni-110727 | jenkins | v1.33.1 | 28 May 24 21:39 UTC | 28 May 24 21:39 UTC |
	|         | sudo crio config                                       |                           |         |         |                     |                     |
	| delete  | -p enable-default-cni-110727                           | enable-default-cni-110727 | jenkins | v1.33.1 | 28 May 24 21:39 UTC | 28 May 24 21:39 UTC |
	| start   | -p embed-certs-595279                                  | embed-certs-595279        | jenkins | v1.33.1 | 28 May 24 21:39 UTC | 28 May 24 21:41 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-290122             | no-preload-290122         | jenkins | v1.33.1 | 28 May 24 21:41 UTC | 28 May 24 21:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p no-preload-290122                                   | no-preload-290122         | jenkins | v1.33.1 | 28 May 24 21:41 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-595279            | embed-certs-595279        | jenkins | v1.33.1 | 28 May 24 21:41 UTC | 28 May 24 21:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p embed-certs-595279                                  | embed-certs-595279        | jenkins | v1.33.1 | 28 May 24 21:41 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-499466        | old-k8s-version-499466    | jenkins | v1.33.1 | 28 May 24 21:43 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-290122                  | no-preload-290122         | jenkins | v1.33.1 | 28 May 24 21:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-595279                 | embed-certs-595279        | jenkins | v1.33.1 | 28 May 24 21:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p no-preload-290122                                   | no-preload-290122         | jenkins | v1.33.1 | 28 May 24 21:44 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                           |         |         |                     |                     |
	| start   | -p embed-certs-595279                                  | embed-certs-595279        | jenkins | v1.33.1 | 28 May 24 21:44 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                           |         |         |                     |                     |
	| stop    | -p old-k8s-version-499466                              | old-k8s-version-499466    | jenkins | v1.33.1 | 28 May 24 21:45 UTC | 28 May 24 21:45 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-499466             | old-k8s-version-499466    | jenkins | v1.33.1 | 28 May 24 21:45 UTC | 28 May 24 21:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p old-k8s-version-499466                              | old-k8s-version-499466    | jenkins | v1.33.1 | 28 May 24 21:45 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/28 21:45:09
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0528 21:45:09.511734   70393 out.go:291] Setting OutFile to fd 1 ...
	I0528 21:45:09.512015   70393 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:45:09.512024   70393 out.go:304] Setting ErrFile to fd 2...
	I0528 21:45:09.512029   70393 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:45:09.512230   70393 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
	I0528 21:45:09.512722   70393 out.go:298] Setting JSON to false
	I0528 21:45:09.513628   70393 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5252,"bootTime":1716927457,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0528 21:45:09.513688   70393 start.go:139] virtualization: kvm guest
	I0528 21:45:09.515710   70393 out.go:177] * [old-k8s-version-499466] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0528 21:45:09.516851   70393 notify.go:220] Checking for updates...
	I0528 21:45:09.516855   70393 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 21:45:09.518143   70393 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 21:45:09.519313   70393 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 21:45:09.520458   70393 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 21:45:09.521564   70393 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0528 21:45:09.522750   70393 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 21:45:09.524143   70393 config.go:182] Loaded profile config "old-k8s-version-499466": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0528 21:45:09.524521   70393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:45:09.524564   70393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:45:09.538978   70393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40875
	I0528 21:45:09.539311   70393 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:45:09.539762   70393 main.go:141] libmachine: Using API Version  1
	I0528 21:45:09.539785   70393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:45:09.540071   70393 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:45:09.540270   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .DriverName
	I0528 21:45:09.541692   70393 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0528 21:45:09.542685   70393 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 21:45:09.542974   70393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:45:09.543016   70393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:45:09.556837   70393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41363
	I0528 21:45:09.557242   70393 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:45:09.557708   70393 main.go:141] libmachine: Using API Version  1
	I0528 21:45:09.557733   70393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:45:09.558014   70393 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:45:09.558272   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .DriverName
	I0528 21:45:09.591821   70393 out.go:177] * Using the kvm2 driver based on existing profile
	I0528 21:45:09.593187   70393 start.go:297] selected driver: kvm2
	I0528 21:45:09.593202   70393 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-499466 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-499466 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.8 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h
0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:45:09.593310   70393 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0528 21:45:09.594048   70393 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 21:45:09.594116   70393 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18966-3963/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0528 21:45:09.608513   70393 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0528 21:45:09.608837   70393 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 21:45:09.608887   70393 cni.go:84] Creating CNI manager for ""
	I0528 21:45:09.608900   70393 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 21:45:09.608947   70393 start.go:340] cluster config:
	{Name:old-k8s-version-499466 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-499466 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.8 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:45:09.609033   70393 iso.go:125] acquiring lock: {Name:mk0ef48bd19f4019c3ee87391d15b9afc1d5e8d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 21:45:09.610774   70393 out.go:177] * Starting "old-k8s-version-499466" primary control-plane node in "old-k8s-version-499466" cluster
	I0528 21:45:09.611958   70393 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0528 21:45:09.611994   70393 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0528 21:45:09.612009   70393 cache.go:56] Caching tarball of preloaded images
	I0528 21:45:09.612080   70393 preload.go:173] Found /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0528 21:45:09.612090   70393 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0528 21:45:09.612179   70393 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/old-k8s-version-499466/config.json ...
	I0528 21:45:09.612358   70393 start.go:360] acquireMachinesLock for old-k8s-version-499466: {Name:mk6fb3103b370c7f3d9923fd9de7afe2c77239d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0528 21:45:14.361922   69886 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.138:22: connect: no route to host
	I0528 21:45:17.433991   69886 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.138:22: connect: no route to host
	I0528 21:45:23.513939   69886 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.138:22: connect: no route to host
	I0528 21:45:26.585996   69886 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.138:22: connect: no route to host
	I0528 21:45:32.666037   69886 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.138:22: connect: no route to host
	I0528 21:45:35.738110   69886 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.138:22: connect: no route to host
	I0528 21:45:41.818009   69886 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.138:22: connect: no route to host
	I0528 21:45:44.889968   69886 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.138:22: connect: no route to host
	I0528 21:45:50.969975   69886 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.138:22: connect: no route to host
	I0528 21:45:54.042009   69886 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.138:22: connect: no route to host
	I0528 21:46:00.122004   69886 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.138:22: connect: no route to host
	I0528 21:46:03.194017   69886 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.138:22: connect: no route to host
	I0528 21:46:09.274008   69886 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.138:22: connect: no route to host
	I0528 21:46:12.346016   69886 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.138:22: connect: no route to host
	I0528 21:46:18.426046   69886 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.138:22: connect: no route to host
	I0528 21:46:21.497976   69886 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.138:22: connect: no route to host
	I0528 21:46:27.578077   69886 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.138:22: connect: no route to host
	I0528 21:46:30.650040   69886 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.138:22: connect: no route to host
	I0528 21:46:36.730041   69886 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.138:22: connect: no route to host
	I0528 21:46:39.802088   69886 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.138:22: connect: no route to host
	I0528 21:46:45.881994   69886 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.138:22: connect: no route to host
	I0528 21:46:48.954026   69886 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.138:22: connect: no route to host
	I0528 21:46:55.827671   53940 kubeadm.go:309] [api-check] The API server is not healthy after 4m0.000155192s
	I0528 21:46:55.827692   53940 kubeadm.go:309] 
	I0528 21:46:55.827723   53940 kubeadm.go:309] Unfortunately, an error has occurred:
	I0528 21:46:55.827746   53940 kubeadm.go:309] 	context deadline exceeded
	I0528 21:46:55.827750   53940 kubeadm.go:309] 
	I0528 21:46:55.827776   53940 kubeadm.go:309] This error is likely caused by:
	I0528 21:46:55.827800   53940 kubeadm.go:309] 	- The kubelet is not running
	I0528 21:46:55.827889   53940 kubeadm.go:309] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0528 21:46:55.827894   53940 kubeadm.go:309] 
	I0528 21:46:55.827980   53940 kubeadm.go:309] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0528 21:46:55.828009   53940 kubeadm.go:309] 	- 'systemctl status kubelet'
	I0528 21:46:55.828033   53940 kubeadm.go:309] 	- 'journalctl -xeu kubelet'
	I0528 21:46:55.828036   53940 kubeadm.go:309] 
	I0528 21:46:55.828117   53940 kubeadm.go:309] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0528 21:46:55.828212   53940 kubeadm.go:309] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0528 21:46:55.828340   53940 kubeadm.go:309] Here is one example how you may list all running Kubernetes containers by using crictl:
	I0528 21:46:55.828445   53940 kubeadm.go:309] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0528 21:46:55.828520   53940 kubeadm.go:309] 	Once you have found the failing container, you can inspect its logs with:
	I0528 21:46:55.828588   53940 kubeadm.go:309] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I0528 21:46:55.829340   53940 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0528 21:46:55.829415   53940 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0528 21:46:55.829467   53940 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0528 21:46:55.829515   53940 kubeadm.go:393] duration metric: took 12m20.369673708s to StartCluster
	I0528 21:46:55.829543   53940 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:46:55.829588   53940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:46:55.872130   53940 cri.go:89] found id: "90c9c6f162ba1eb377805dd9238d42c4d6ae755187b7fa398d84cd8a0f5f4d87"
	I0528 21:46:55.872139   53940 cri.go:89] found id: ""
	I0528 21:46:55.872145   53940 logs.go:276] 1 containers: [90c9c6f162ba1eb377805dd9238d42c4d6ae755187b7fa398d84cd8a0f5f4d87]
	I0528 21:46:55.872188   53940 ssh_runner.go:195] Run: which crictl
	I0528 21:46:55.876443   53940 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:46:55.876492   53940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:46:55.910002   53940 cri.go:89] found id: ""
	I0528 21:46:55.910016   53940 logs.go:276] 0 containers: []
	W0528 21:46:55.910024   53940 logs.go:278] No container was found matching "etcd"
	I0528 21:46:55.910030   53940 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:46:55.910087   53940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:46:55.949721   53940 cri.go:89] found id: ""
	I0528 21:46:55.949735   53940 logs.go:276] 0 containers: []
	W0528 21:46:55.949743   53940 logs.go:278] No container was found matching "coredns"
	I0528 21:46:55.949747   53940 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:46:55.949798   53940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:46:55.984116   53940 cri.go:89] found id: "ec3bd9c3185e56b249c77bce098419eab51555165f6f39aeea6c2e5317d8777a"
	I0528 21:46:55.984124   53940 cri.go:89] found id: ""
	I0528 21:46:55.984129   53940 logs.go:276] 1 containers: [ec3bd9c3185e56b249c77bce098419eab51555165f6f39aeea6c2e5317d8777a]
	I0528 21:46:55.984163   53940 ssh_runner.go:195] Run: which crictl
	I0528 21:46:55.988063   53940 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:46:55.988107   53940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:46:56.022341   53940 cri.go:89] found id: ""
	I0528 21:46:56.022352   53940 logs.go:276] 0 containers: []
	W0528 21:46:56.022357   53940 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:46:56.022361   53940 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:46:56.022406   53940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:46:56.059896   53940 cri.go:89] found id: "f7db65d803a9cc473510082076c295e2d481ee224a5553a921e8a802c175b6ca"
	I0528 21:46:56.059910   53940 cri.go:89] found id: ""
	I0528 21:46:56.059916   53940 logs.go:276] 1 containers: [f7db65d803a9cc473510082076c295e2d481ee224a5553a921e8a802c175b6ca]
	I0528 21:46:56.059956   53940 ssh_runner.go:195] Run: which crictl
	I0528 21:46:56.064036   53940 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:46:56.064081   53940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:46:56.099792   53940 cri.go:89] found id: ""
	I0528 21:46:56.099802   53940 logs.go:276] 0 containers: []
	W0528 21:46:56.099807   53940 logs.go:278] No container was found matching "kindnet"
	I0528 21:46:56.099811   53940 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0528 21:46:56.099852   53940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0528 21:46:56.136437   53940 cri.go:89] found id: ""
	I0528 21:46:56.136451   53940 logs.go:276] 0 containers: []
	W0528 21:46:56.136460   53940 logs.go:278] No container was found matching "storage-provisioner"
	I0528 21:46:56.136469   53940 logs.go:123] Gathering logs for kube-apiserver [90c9c6f162ba1eb377805dd9238d42c4d6ae755187b7fa398d84cd8a0f5f4d87] ...
	I0528 21:46:56.136484   53940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 90c9c6f162ba1eb377805dd9238d42c4d6ae755187b7fa398d84cd8a0f5f4d87"
	I0528 21:46:56.173985   53940 logs.go:123] Gathering logs for kube-scheduler [ec3bd9c3185e56b249c77bce098419eab51555165f6f39aeea6c2e5317d8777a] ...
	I0528 21:46:56.173999   53940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec3bd9c3185e56b249c77bce098419eab51555165f6f39aeea6c2e5317d8777a"
	I0528 21:46:56.248078   53940 logs.go:123] Gathering logs for kube-controller-manager [f7db65d803a9cc473510082076c295e2d481ee224a5553a921e8a802c175b6ca] ...
	I0528 21:46:56.248092   53940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7db65d803a9cc473510082076c295e2d481ee224a5553a921e8a802c175b6ca"
	I0528 21:46:56.283105   53940 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:46:56.283122   53940 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:46:56.506470   53940 logs.go:123] Gathering logs for container status ...
	I0528 21:46:56.506489   53940 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:46:56.545042   53940 logs.go:123] Gathering logs for kubelet ...
	I0528 21:46:56.545055   53940 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:46:56.666847   53940 logs.go:123] Gathering logs for dmesg ...
	I0528 21:46:56.666863   53940 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:46:56.681086   53940 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:46:56.681098   53940 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:46:56.751825   53940 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0528 21:46:56.751865   53940 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.30.1
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.668534ms
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000155192s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0528 21:46:56.751889   53940 out.go:239] * 
	W0528 21:46:56.751973   53940 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.30.1
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.668534ms
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000155192s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0528 21:46:56.751997   53940 out.go:239] * 
	W0528 21:46:56.752844   53940 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0528 21:46:56.755947   53940 out.go:177] 
	W0528 21:46:56.757378   53940 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.30.1
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.668534ms
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000155192s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0528 21:46:56.757429   53940 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0528 21:46:56.757456   53940 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0528 21:46:56.759856   53940 out.go:177] 
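	[editor's note] The suggestion printed above can be tried as-is; the sketch below is only illustrative, and the -p profile name is an assumption taken from the node name (cert-expiration-257793) that appears throughout this log.
	  # retry with the kubelet cgroup driver pinned to systemd, as the suggestion recommends
	  minikube start -p cert-expiration-257793 --extra-config=kubelet.cgroup-driver=systemd
	  # collect full logs for a GitHub issue, as the message box above recommends
	  minikube logs -p cert-expiration-257793 --file=logs.txt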
	
	
	==> CRI-O <==
	May 28 21:46:57 cert-expiration-257793 crio[2965]: time="2024-05-28 21:46:57.290672691Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716932817290650022,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=56521ba6-06cb-48ce-8d58-11a0853e3d2c name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:46:57 cert-expiration-257793 crio[2965]: time="2024-05-28 21:46:57.291100039Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=954ed51b-f631-47df-92d1-ae82648eb5e8 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:46:57 cert-expiration-257793 crio[2965]: time="2024-05-28 21:46:57.291153190Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=954ed51b-f631-47df-92d1-ae82648eb5e8 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:46:57 cert-expiration-257793 crio[2965]: time="2024-05-28 21:46:57.291248098Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:90c9c6f162ba1eb377805dd9238d42c4d6ae755187b7fa398d84cd8a0f5f4d87,PodSandboxId:8ffc46bfe1f787dc7e6c8103010981b4ee867ea8aa597d060ef8eb2aeb51cafa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716932766631453337,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-cert-expiration-257793,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0398ebb7aaf83b2b1c8289ae75c52939,},Annotations:map[string]string{io.kubernetes.container.hash: c4a85004,io.kubernetes.container.restartCount: 15,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7db65d803a9cc473510082076c295e2d481ee224a5553a921e8a802c175b6ca,PodSandboxId:de03fee09289e13cc16bf75943ec8433b16ce767262e07857e1edee1ab5628f8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:15,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716932762637276279,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-cert-expiration-257793,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58109ef96fcb89c0d240d34703e2726e,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.rest
artCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec3bd9c3185e56b249c77bce098419eab51555165f6f39aeea6c2e5317d8777a,PodSandboxId:3b460f59ee70d8d4904690f555d3bf40fb627a9d01afe8374694cd5953b87ea4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716932576272275706,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-cert-expiration-257793,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20b16cb02ec458ae435a02e076cd1d8c,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCoun
t: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=954ed51b-f631-47df-92d1-ae82648eb5e8 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:46:57 cert-expiration-257793 crio[2965]: time="2024-05-28 21:46:57.322729872Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7df155f8-c02f-4c7a-8cbd-d2029076464a name=/runtime.v1.RuntimeService/Version
	May 28 21:46:57 cert-expiration-257793 crio[2965]: time="2024-05-28 21:46:57.322793633Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7df155f8-c02f-4c7a-8cbd-d2029076464a name=/runtime.v1.RuntimeService/Version
	May 28 21:46:57 cert-expiration-257793 crio[2965]: time="2024-05-28 21:46:57.323968821Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=078940e0-75ed-425f-b8dd-88ee7bf4669d name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:46:57 cert-expiration-257793 crio[2965]: time="2024-05-28 21:46:57.324337106Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716932817324313779,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=078940e0-75ed-425f-b8dd-88ee7bf4669d name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:46:57 cert-expiration-257793 crio[2965]: time="2024-05-28 21:46:57.324788994Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c8c85774-d58a-4cf5-8658-fd1ff36d4f72 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:46:57 cert-expiration-257793 crio[2965]: time="2024-05-28 21:46:57.324865840Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c8c85774-d58a-4cf5-8658-fd1ff36d4f72 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:46:57 cert-expiration-257793 crio[2965]: time="2024-05-28 21:46:57.324974514Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:90c9c6f162ba1eb377805dd9238d42c4d6ae755187b7fa398d84cd8a0f5f4d87,PodSandboxId:8ffc46bfe1f787dc7e6c8103010981b4ee867ea8aa597d060ef8eb2aeb51cafa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716932766631453337,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-cert-expiration-257793,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0398ebb7aaf83b2b1c8289ae75c52939,},Annotations:map[string]string{io.kubernetes.container.hash: c4a85004,io.kubernetes.container.restartCount: 15,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7db65d803a9cc473510082076c295e2d481ee224a5553a921e8a802c175b6ca,PodSandboxId:de03fee09289e13cc16bf75943ec8433b16ce767262e07857e1edee1ab5628f8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:15,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716932762637276279,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-cert-expiration-257793,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58109ef96fcb89c0d240d34703e2726e,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.rest
artCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec3bd9c3185e56b249c77bce098419eab51555165f6f39aeea6c2e5317d8777a,PodSandboxId:3b460f59ee70d8d4904690f555d3bf40fb627a9d01afe8374694cd5953b87ea4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716932576272275706,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-cert-expiration-257793,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20b16cb02ec458ae435a02e076cd1d8c,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCoun
t: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c8c85774-d58a-4cf5-8658-fd1ff36d4f72 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:46:57 cert-expiration-257793 crio[2965]: time="2024-05-28 21:46:57.361754981Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a9d10b98-9a9c-4944-a98a-8d291c3ee317 name=/runtime.v1.RuntimeService/Version
	May 28 21:46:57 cert-expiration-257793 crio[2965]: time="2024-05-28 21:46:57.361840089Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a9d10b98-9a9c-4944-a98a-8d291c3ee317 name=/runtime.v1.RuntimeService/Version
	May 28 21:46:57 cert-expiration-257793 crio[2965]: time="2024-05-28 21:46:57.362894403Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a295e633-2933-4571-b5cb-3d998b2e1762 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:46:57 cert-expiration-257793 crio[2965]: time="2024-05-28 21:46:57.363240470Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716932817363220768,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a295e633-2933-4571-b5cb-3d998b2e1762 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:46:57 cert-expiration-257793 crio[2965]: time="2024-05-28 21:46:57.363745961Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7857b957-8bf7-4bf0-9e77-5762c876e9ad name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:46:57 cert-expiration-257793 crio[2965]: time="2024-05-28 21:46:57.363830481Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7857b957-8bf7-4bf0-9e77-5762c876e9ad name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:46:57 cert-expiration-257793 crio[2965]: time="2024-05-28 21:46:57.363924191Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:90c9c6f162ba1eb377805dd9238d42c4d6ae755187b7fa398d84cd8a0f5f4d87,PodSandboxId:8ffc46bfe1f787dc7e6c8103010981b4ee867ea8aa597d060ef8eb2aeb51cafa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716932766631453337,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-cert-expiration-257793,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0398ebb7aaf83b2b1c8289ae75c52939,},Annotations:map[string]string{io.kubernetes.container.hash: c4a85004,io.kubernetes.container.restartCount: 15,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7db65d803a9cc473510082076c295e2d481ee224a5553a921e8a802c175b6ca,PodSandboxId:de03fee09289e13cc16bf75943ec8433b16ce767262e07857e1edee1ab5628f8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:15,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716932762637276279,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-cert-expiration-257793,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58109ef96fcb89c0d240d34703e2726e,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.rest
artCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec3bd9c3185e56b249c77bce098419eab51555165f6f39aeea6c2e5317d8777a,PodSandboxId:3b460f59ee70d8d4904690f555d3bf40fb627a9d01afe8374694cd5953b87ea4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716932576272275706,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-cert-expiration-257793,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20b16cb02ec458ae435a02e076cd1d8c,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCoun
t: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7857b957-8bf7-4bf0-9e77-5762c876e9ad name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:46:57 cert-expiration-257793 crio[2965]: time="2024-05-28 21:46:57.399387296Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7aceb47f-e793-42de-b2ef-5c7451f168ca name=/runtime.v1.RuntimeService/Version
	May 28 21:46:57 cert-expiration-257793 crio[2965]: time="2024-05-28 21:46:57.399473467Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7aceb47f-e793-42de-b2ef-5c7451f168ca name=/runtime.v1.RuntimeService/Version
	May 28 21:46:57 cert-expiration-257793 crio[2965]: time="2024-05-28 21:46:57.400435787Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=193b93b6-1d73-43db-b82f-dcec6c729a42 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:46:57 cert-expiration-257793 crio[2965]: time="2024-05-28 21:46:57.400841264Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716932817400821114,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=193b93b6-1d73-43db-b82f-dcec6c729a42 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:46:57 cert-expiration-257793 crio[2965]: time="2024-05-28 21:46:57.401375614Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=af4ae6ee-e873-4153-a699-661e485b4506 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:46:57 cert-expiration-257793 crio[2965]: time="2024-05-28 21:46:57.401446687Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=af4ae6ee-e873-4153-a699-661e485b4506 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:46:57 cert-expiration-257793 crio[2965]: time="2024-05-28 21:46:57.401570400Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:90c9c6f162ba1eb377805dd9238d42c4d6ae755187b7fa398d84cd8a0f5f4d87,PodSandboxId:8ffc46bfe1f787dc7e6c8103010981b4ee867ea8aa597d060ef8eb2aeb51cafa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716932766631453337,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-cert-expiration-257793,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0398ebb7aaf83b2b1c8289ae75c52939,},Annotations:map[string]string{io.kubernetes.container.hash: c4a85004,io.kubernetes.container.restartCount: 15,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7db65d803a9cc473510082076c295e2d481ee224a5553a921e8a802c175b6ca,PodSandboxId:de03fee09289e13cc16bf75943ec8433b16ce767262e07857e1edee1ab5628f8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:15,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716932762637276279,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-cert-expiration-257793,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58109ef96fcb89c0d240d34703e2726e,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.rest
artCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec3bd9c3185e56b249c77bce098419eab51555165f6f39aeea6c2e5317d8777a,PodSandboxId:3b460f59ee70d8d4904690f555d3bf40fb627a9d01afe8374694cd5953b87ea4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716932576272275706,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-cert-expiration-257793,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20b16cb02ec458ae435a02e076cd1d8c,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCoun
t: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=af4ae6ee-e873-4153-a699-661e485b4506 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	90c9c6f162ba1       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   50 seconds ago      Exited              kube-apiserver            15                  8ffc46bfe1f78       kube-apiserver-cert-expiration-257793
	f7db65d803a9c       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   54 seconds ago      Exited              kube-controller-manager   15                  de03fee09289e       kube-controller-manager-cert-expiration-257793
	ec3bd9c3185e5       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   4 minutes ago       Running             kube-scheduler            4                   3b460f59ee70d       kube-scheduler-cert-expiration-257793
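	[editor's note] Both the apiserver and the controller-manager show STATE Exited after 15 attempts, so pulling their logs by ID is the natural next step; this sketch uses the full container IDs that appear elsewhere in this report (their tails are also reproduced in the sections below), and assumes sudo on the node.
	  sudo crictl logs 90c9c6f162ba1eb377805dd9238d42c4d6ae755187b7fa398d84cd8a0f5f4d87   # kube-apiserver
	  sudo crictl logs f7db65d803a9cc473510082076c295e2d481ee224a5553a921e8a802c175b6ca   # kube-controller-manager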
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
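	[editor's note] The refusal on localhost:8443 is consistent with the exited kube-apiserver shown above. A quick cross-check, sketched here under the assumption of shell access to the node (for example via minikube ssh), is to confirm nothing is serving that port and to retry with the same kubectl binary and kubeconfig the report used:
	  sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
	  sudo /var/lib/minikube/binaries/v1.30.1/kubectl get nodes --kubeconfig=/var/lib/minikube/kubeconfig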
	
	
	==> dmesg <==
	[  +0.060304] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.199263] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.137060] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.283303] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +4.128858] systemd-fstab-generator[760]: Ignoring "noauto" option for root device
	[  +4.309403] systemd-fstab-generator[937]: Ignoring "noauto" option for root device
	[  +0.063160] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.049575] systemd-fstab-generator[1278]: Ignoring "noauto" option for root device
	[  +0.106285] kauditd_printk_skb: 69 callbacks suppressed
	[  +1.286994] systemd-fstab-generator[1371]: Ignoring "noauto" option for root device
	[ +13.071185] kauditd_printk_skb: 49 callbacks suppressed
	[May28 21:30] kauditd_printk_skb: 55 callbacks suppressed
	[May28 21:33] systemd-fstab-generator[2647]: Ignoring "noauto" option for root device
	[  +0.315247] systemd-fstab-generator[2707]: Ignoring "noauto" option for root device
	[  +0.316692] systemd-fstab-generator[2733]: Ignoring "noauto" option for root device
	[  +0.249551] systemd-fstab-generator[2763]: Ignoring "noauto" option for root device
	[  +0.442455] systemd-fstab-generator[2815]: Ignoring "noauto" option for root device
	[May28 21:34] systemd-fstab-generator[3077]: Ignoring "noauto" option for root device
	[  +0.085489] kauditd_printk_skb: 180 callbacks suppressed
	[  +2.531636] systemd-fstab-generator[3202]: Ignoring "noauto" option for root device
	[ +22.528419] kauditd_printk_skb: 79 callbacks suppressed
	[May28 21:38] systemd-fstab-generator[9536]: Ignoring "noauto" option for root device
	[May28 21:39] kauditd_printk_skb: 71 callbacks suppressed
	[May28 21:42] systemd-fstab-generator[11403]: Ignoring "noauto" option for root device
	[May28 21:43] kauditd_printk_skb: 54 callbacks suppressed
	
	
	==> kernel <==
	 21:46:57 up 18 min,  0 users,  load average: 0.01, 0.09, 0.12
	Linux cert-expiration-257793 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [90c9c6f162ba1eb377805dd9238d42c4d6ae755187b7fa398d84cd8a0f5f4d87] <==
	I0528 21:46:06.805117       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0528 21:46:07.249753       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0528 21:46:07.250974       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	W0528 21:46:07.251238       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0528 21:46:07.251238       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0528 21:46:07.251455       1 instance.go:299] Using reconciler: lease
	I0528 21:46:07.251044       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0528 21:46:07.251005       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	W0528 21:46:07.253260       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:46:08.251756       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:46:08.251756       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:46:08.254181       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:46:09.695490       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:46:09.882295       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:46:09.993252       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:46:12.117483       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:46:12.523657       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:46:12.772974       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:46:16.190138       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:46:17.067919       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:46:17.282031       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:46:22.334202       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:46:23.526174       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:46:24.000464       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0528 21:46:27.253096       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
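	[editor's note] Every dial in the apiserver log above targets etcd at 127.0.0.1:2379 and is refused, and the crictl scan earlier in this report found no etcd container at all. A sketch of how one might confirm that on the node (the manifest path comes from the kubeadm output above; commands assume sudo):
	  sudo crictl ps -a --name=etcd                            # earlier in this report this returned no containers
	  ls -l /etc/kubernetes/manifests/                         # etcd.yaml should exist here per the kubeadm output
	  sudo journalctl -u kubelet | grep -i etcd | tail -n 20   # why the kubelet never started the etcd static pod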
	
	
	==> kube-controller-manager [f7db65d803a9cc473510082076c295e2d481ee224a5553a921e8a802c175b6ca] <==
	I0528 21:46:03.148670       1 serving.go:380] Generated self-signed cert in-memory
	I0528 21:46:03.923421       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0528 21:46:03.923460       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 21:46:03.926305       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0528 21:46:03.927022       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0528 21:46:03.927194       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0528 21:46:03.927470       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E0528 21:46:26.931787       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.72.246:8443/healthz\": net/http: TLS handshake timeout"
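	[editor's note] The address in the error above is the apiserver endpoint the controller-manager is probing; hitting it directly is a quick cross-check. This is only a sketch and assumes curl is available on the node.
	  # expect a refused or hung connection while kube-apiserver is down
	  curl -k --max-time 5 https://192.168.72.246:8443/healthz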
	
	
	==> kube-scheduler [ec3bd9c3185e56b249c77bce098419eab51555165f6f39aeea6c2e5317d8777a] <==
	W0528 21:46:28.257229       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.72.246:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.72.246:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.72.246:60488->192.168.72.246:8443: read: connection reset by peer
	W0528 21:46:28.257365       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.72.246:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.72.246:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.72.246:35056->192.168.72.246:8443: read: connection reset by peer
	E0528 21:46:28.257418       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.72.246:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.72.246:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.72.246:35056->192.168.72.246:8443: read: connection reset by peer
	I0528 21:46:28.257458       1 trace.go:236] Trace[1466512205]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-May-2024 21:46:17.906) (total time: 10351ms):
	Trace[1466512205]: ---"Objects listed" error:Get "https://192.168.72.246:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.72.246:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.72.246:60488->192.168.72.246:8443: read: connection reset by peer 10350ms (21:46:28.257)
	Trace[1466512205]: [10.351158965s] [10.351158965s] END
	E0528 21:46:28.257471       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.72.246:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.72.246:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.72.246:60488->192.168.72.246:8443: read: connection reset by peer
	W0528 21:46:28.909156       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.72.246:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.72.246:8443: connect: connection refused
	E0528 21:46:28.909229       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.72.246:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.72.246:8443: connect: connection refused
	W0528 21:46:31.507546       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.72.246:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.72.246:8443: connect: connection refused
	E0528 21:46:31.507607       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.72.246:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.72.246:8443: connect: connection refused
	W0528 21:46:32.313049       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.72.246:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.72.246:8443: connect: connection refused
	E0528 21:46:32.313135       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.72.246:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.72.246:8443: connect: connection refused
	W0528 21:46:32.685777       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.72.246:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.72.246:8443: connect: connection refused
	E0528 21:46:32.685826       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.72.246:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.72.246:8443: connect: connection refused
	W0528 21:46:33.750453       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.72.246:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.72.246:8443: connect: connection refused
	E0528 21:46:33.750589       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.72.246:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.72.246:8443: connect: connection refused
	W0528 21:46:34.721177       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.72.246:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.72.246:8443: connect: connection refused
	E0528 21:46:34.721218       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.72.246:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.72.246:8443: connect: connection refused
	W0528 21:46:42.393219       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.72.246:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.72.246:8443: connect: connection refused
	E0528 21:46:42.393275       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.72.246:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.72.246:8443: connect: connection refused
	W0528 21:46:46.197122       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.72.246:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.72.246:8443: connect: connection refused
	E0528 21:46:46.197198       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.72.246:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.72.246:8443: connect: connection refused
	W0528 21:46:56.915982       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.72.246:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.72.246:8443: connect: connection refused
	E0528 21:46:56.916042       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.72.246:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.72.246:8443: connect: connection refused
	
	
	==> kubelet <==
	May 28 21:46:44 cert-expiration-257793 kubelet[11410]: I0528 21:46:44.623381   11410 scope.go:117] "RemoveContainer" containerID="f7db65d803a9cc473510082076c295e2d481ee224a5553a921e8a802c175b6ca"
	May 28 21:46:44 cert-expiration-257793 kubelet[11410]: E0528 21:46:44.623778   11410 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-cert-expiration-257793_kube-system(58109ef96fcb89c0d240d34703e2726e)\"" pod="kube-system/kube-controller-manager-cert-expiration-257793" podUID="58109ef96fcb89c0d240d34703e2726e"
	May 28 21:46:45 cert-expiration-257793 kubelet[11410]: E0528 21:46:45.681089   11410 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"cert-expiration-257793\" not found"
	May 28 21:46:47 cert-expiration-257793 kubelet[11410]: I0528 21:46:47.622994   11410 scope.go:117] "RemoveContainer" containerID="90c9c6f162ba1eb377805dd9238d42c4d6ae755187b7fa398d84cd8a0f5f4d87"
	May 28 21:46:47 cert-expiration-257793 kubelet[11410]: E0528 21:46:47.623391   11410 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-cert-expiration-257793_kube-system(0398ebb7aaf83b2b1c8289ae75c52939)\"" pod="kube-system/kube-apiserver-cert-expiration-257793" podUID="0398ebb7aaf83b2b1c8289ae75c52939"
	May 28 21:46:49 cert-expiration-257793 kubelet[11410]: W0528 21:46:49.346172   11410 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.72.246:8443: connect: connection refused
	May 28 21:46:49 cert-expiration-257793 kubelet[11410]: E0528 21:46:49.346645   11410 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.72.246:8443: connect: connection refused
	May 28 21:46:49 cert-expiration-257793 kubelet[11410]: I0528 21:46:49.361409   11410 kubelet_node_status.go:73] "Attempting to register node" node="cert-expiration-257793"
	May 28 21:46:49 cert-expiration-257793 kubelet[11410]: E0528 21:46:49.362224   11410 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.72.246:8443: connect: connection refused" node="cert-expiration-257793"
	May 28 21:46:50 cert-expiration-257793 kubelet[11410]: E0528 21:46:50.327213   11410 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/cert-expiration-257793?timeout=10s\": dial tcp 192.168.72.246:8443: connect: connection refused" interval="7s"
	May 28 21:46:53 cert-expiration-257793 kubelet[11410]: E0528 21:46:53.918382   11410 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.72.246:8443: connect: connection refused" event="&Event{ObjectMeta:{cert-expiration-257793.17d3c514b29cc2a9  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:cert-expiration-257793,UID:cert-expiration-257793,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node cert-expiration-257793 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:cert-expiration-257793,},FirstTimestamp:2024-05-28 21:42:55.644926633 +0000 UTC m=+0.360595280,LastTimestamp:2024-05-28 21:42:55.644926633 +0000 UTC m=+0.360595280,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:cert-expiration-257793,}"
	May 28 21:46:55 cert-expiration-257793 kubelet[11410]: E0528 21:46:55.628982   11410 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_etcd_etcd-cert-expiration-257793_kube-system_879ecb28d4a1906b20ef2628906e2f15_1\" is already in use by 65222c4900dee728bc60d68c2905a4ae2f4ecdfa28156ff5dde5efb449fcc01d. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="eae1ec348bc88de0e0d24fb47e67a014c3b4457bc2ca5ce5d3512466596b5b2c"
	May 28 21:46:55 cert-expiration-257793 kubelet[11410]: E0528 21:46:55.629122   11410 kuberuntime_manager.go:1256] container &Container{Name:etcd,Image:registry.k8s.io/etcd:3.5.12-0,Command:[etcd --advertise-client-urls=https://192.168.72.246:2379 --cert-file=/var/lib/minikube/certs/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/minikube/etcd --experimental-initial-corrupt-check=true --experimental-watch-progress-notify-interval=5s --initial-advertise-peer-urls=https://192.168.72.246:2380 --initial-cluster=cert-expiration-257793=https://192.168.72.246:2380 --key-file=/var/lib/minikube/certs/etcd/server.key --listen-client-urls=https://127.0.0.1:2379,https://192.168.72.246:2379 --listen-metrics-urls=http://127.0.0.1:2381 --listen-peer-urls=https://192.168.72.246:2380 --name=cert-expiration-257793 --peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/var/lib/minikube/certs/etcd/peer.key --peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt --proxy-refresh-interval=70000 --snapshot-count=10000 --trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{104857600 0} {<nil>} 100Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etcd-data,ReadOnly:false,MountPath:/var/lib/minikube/etcd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etcd-certs,ReadOnly:false,MountPath:/var/lib/minikube/certs/etcd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health?exclude=NOSPACE&serializable=true,Port:{0 2381 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health?serializable=false,Port:{0 2381 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:24,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod etcd-cert-expiration-257793_kube-system(879ecb28d4a1906b20ef2628906e2f15): CreateContainerError: the container name "k8s_etcd_etcd-cert-expiration-257793_kube-system_879ecb28d4a1906b20ef2628906e2f15_1" is already in use by 65222c4900dee728bc60d68c2905a4ae2f4ecdfa28156ff5dde5efb449fcc01d. You have to remove that container to be able to reuse that name: that name is already in use
	May 28 21:46:55 cert-expiration-257793 kubelet[11410]: E0528 21:46:55.629149   11410 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"the container name \\\"k8s_etcd_etcd-cert-expiration-257793_kube-system_879ecb28d4a1906b20ef2628906e2f15_1\\\" is already in use by 65222c4900dee728bc60d68c2905a4ae2f4ecdfa28156ff5dde5efb449fcc01d. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/etcd-cert-expiration-257793" podUID="879ecb28d4a1906b20ef2628906e2f15"
	May 28 21:46:55 cert-expiration-257793 kubelet[11410]: E0528 21:46:55.646006   11410 iptables.go:577] "Could not set up iptables canary" err=<
	May 28 21:46:55 cert-expiration-257793 kubelet[11410]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 21:46:55 cert-expiration-257793 kubelet[11410]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 21:46:55 cert-expiration-257793 kubelet[11410]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 21:46:55 cert-expiration-257793 kubelet[11410]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 28 21:46:55 cert-expiration-257793 kubelet[11410]: E0528 21:46:55.681601   11410 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"cert-expiration-257793\" not found"
	May 28 21:46:56 cert-expiration-257793 kubelet[11410]: I0528 21:46:56.363770   11410 kubelet_node_status.go:73] "Attempting to register node" node="cert-expiration-257793"
	May 28 21:46:56 cert-expiration-257793 kubelet[11410]: E0528 21:46:56.364467   11410 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.72.246:8443: connect: connection refused" node="cert-expiration-257793"
	May 28 21:46:56 cert-expiration-257793 kubelet[11410]: I0528 21:46:56.622941   11410 scope.go:117] "RemoveContainer" containerID="f7db65d803a9cc473510082076c295e2d481ee224a5553a921e8a802c175b6ca"
	May 28 21:46:56 cert-expiration-257793 kubelet[11410]: E0528 21:46:56.623208   11410 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-cert-expiration-257793_kube-system(58109ef96fcb89c0d240d34703e2726e)\"" pod="kube-system/kube-controller-manager-cert-expiration-257793" podUID="58109ef96fcb89c0d240d34703e2726e"
	May 28 21:46:57 cert-expiration-257793 kubelet[11410]: E0528 21:46:57.328376   11410 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/cert-expiration-257793?timeout=10s\": dial tcp 192.168.72.246:8443: connect: connection refused" interval="7s"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p cert-expiration-257793 -n cert-expiration-257793
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p cert-expiration-257793 -n cert-expiration-257793: exit status 2 (211.250223ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "cert-expiration-257793" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-257793" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-257793
--- FAIL: TestCertExpiration (1145.60s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (141.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 node stop m02 -v=7 --alsologtostderr
E0528 20:42:57.932822   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/functional-193928/client.crt: no such file or directory
E0528 20:43:18.413335   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/functional-193928/client.crt: no such file or directory
E0528 20:43:59.374163   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/functional-193928/client.crt: no such file or directory
E0528 20:44:42.598487   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-908878 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.467548239s)

                                                
                                                
-- stdout --
	* Stopping node "ha-908878-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0528 20:42:56.052428   26575 out.go:291] Setting OutFile to fd 1 ...
	I0528 20:42:56.052829   26575 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:42:56.052839   26575 out.go:304] Setting ErrFile to fd 2...
	I0528 20:42:56.052844   26575 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:42:56.053099   26575 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
	I0528 20:42:56.053380   26575 mustload.go:65] Loading cluster: ha-908878
	I0528 20:42:56.053806   26575 config.go:182] Loaded profile config "ha-908878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 20:42:56.053821   26575 stop.go:39] StopHost: ha-908878-m02
	I0528 20:42:56.054193   26575 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:42:56.054244   26575 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:42:56.069213   26575 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37545
	I0528 20:42:56.069702   26575 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:42:56.070297   26575 main.go:141] libmachine: Using API Version  1
	I0528 20:42:56.070321   26575 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:42:56.070677   26575 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:42:56.072772   26575 out.go:177] * Stopping node "ha-908878-m02"  ...
	I0528 20:42:56.073965   26575 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0528 20:42:56.074003   26575 main.go:141] libmachine: (ha-908878-m02) Calling .DriverName
	I0528 20:42:56.074206   26575 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0528 20:42:56.074231   26575 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHHostname
	I0528 20:42:56.076778   26575 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:42:56.077277   26575 main.go:141] libmachine: (ha-908878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:bd:28", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:39:35 +0000 UTC Type:0 Mac:52:54:00:b4:bd:28 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:ha-908878-m02 Clientid:01:52:54:00:b4:bd:28}
	I0528 20:42:56.077312   26575 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:42:56.077457   26575 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHPort
	I0528 20:42:56.077625   26575 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHKeyPath
	I0528 20:42:56.077749   26575 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHUsername
	I0528 20:42:56.077901   26575 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m02/id_rsa Username:docker}
	I0528 20:42:56.164477   26575 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0528 20:42:56.218137   26575 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0528 20:42:56.274777   26575 main.go:141] libmachine: Stopping "ha-908878-m02"...
	I0528 20:42:56.274823   26575 main.go:141] libmachine: (ha-908878-m02) Calling .GetState
	I0528 20:42:56.276311   26575 main.go:141] libmachine: (ha-908878-m02) Calling .Stop
	I0528 20:42:56.279520   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 0/120
	I0528 20:42:57.280835   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 1/120
	I0528 20:42:58.282617   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 2/120
	I0528 20:42:59.284107   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 3/120
	I0528 20:43:00.285468   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 4/120
	I0528 20:43:01.287633   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 5/120
	I0528 20:43:02.289221   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 6/120
	I0528 20:43:03.290744   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 7/120
	I0528 20:43:04.293039   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 8/120
	I0528 20:43:05.294461   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 9/120
	I0528 20:43:06.296651   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 10/120
	I0528 20:43:07.298045   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 11/120
	I0528 20:43:08.299806   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 12/120
	I0528 20:43:09.301429   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 13/120
	I0528 20:43:10.303538   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 14/120
	I0528 20:43:11.305595   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 15/120
	I0528 20:43:12.307437   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 16/120
	I0528 20:43:13.308869   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 17/120
	I0528 20:43:14.310086   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 18/120
	I0528 20:43:15.311786   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 19/120
	I0528 20:43:16.313227   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 20/120
	I0528 20:43:17.314660   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 21/120
	I0528 20:43:18.315919   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 22/120
	I0528 20:43:19.318254   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 23/120
	I0528 20:43:20.320542   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 24/120
	I0528 20:43:21.322372   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 25/120
	I0528 20:43:22.324307   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 26/120
	I0528 20:43:23.326028   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 27/120
	I0528 20:43:24.328282   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 28/120
	I0528 20:43:25.329628   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 29/120
	I0528 20:43:26.331934   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 30/120
	I0528 20:43:27.333587   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 31/120
	I0528 20:43:28.335533   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 32/120
	I0528 20:43:29.336750   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 33/120
	I0528 20:43:30.338298   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 34/120
	I0528 20:43:31.340114   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 35/120
	I0528 20:43:32.341513   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 36/120
	I0528 20:43:33.342974   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 37/120
	I0528 20:43:34.344513   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 38/120
	I0528 20:43:35.345842   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 39/120
	I0528 20:43:36.348078   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 40/120
	I0528 20:43:37.349645   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 41/120
	I0528 20:43:38.351119   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 42/120
	I0528 20:43:39.352851   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 43/120
	I0528 20:43:40.354204   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 44/120
	I0528 20:43:41.356512   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 45/120
	I0528 20:43:42.357942   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 46/120
	I0528 20:43:43.360199   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 47/120
	I0528 20:43:44.362624   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 48/120
	I0528 20:43:45.364138   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 49/120
	I0528 20:43:46.365466   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 50/120
	I0528 20:43:47.367052   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 51/120
	I0528 20:43:48.368453   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 52/120
	I0528 20:43:49.369832   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 53/120
	I0528 20:43:50.371122   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 54/120
	I0528 20:43:51.372983   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 55/120
	I0528 20:43:52.374383   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 56/120
	I0528 20:43:53.376228   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 57/120
	I0528 20:43:54.377824   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 58/120
	I0528 20:43:55.379045   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 59/120
	I0528 20:43:56.381303   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 60/120
	I0528 20:43:57.382688   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 61/120
	I0528 20:43:58.384192   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 62/120
	I0528 20:43:59.385555   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 63/120
	I0528 20:44:00.386964   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 64/120
	I0528 20:44:01.388782   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 65/120
	I0528 20:44:02.390362   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 66/120
	I0528 20:44:03.391646   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 67/120
	I0528 20:44:04.392871   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 68/120
	I0528 20:44:05.394203   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 69/120
	I0528 20:44:06.396239   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 70/120
	I0528 20:44:07.397649   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 71/120
	I0528 20:44:08.399010   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 72/120
	I0528 20:44:09.401203   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 73/120
	I0528 20:44:10.403490   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 74/120
	I0528 20:44:11.405840   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 75/120
	I0528 20:44:12.407205   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 76/120
	I0528 20:44:13.409350   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 77/120
	I0528 20:44:14.410690   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 78/120
	I0528 20:44:15.412284   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 79/120
	I0528 20:44:16.414191   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 80/120
	I0528 20:44:17.416208   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 81/120
	I0528 20:44:18.417558   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 82/120
	I0528 20:44:19.419204   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 83/120
	I0528 20:44:20.420506   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 84/120
	I0528 20:44:21.422424   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 85/120
	I0528 20:44:22.423887   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 86/120
	I0528 20:44:23.425173   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 87/120
	I0528 20:44:24.427028   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 88/120
	I0528 20:44:25.428336   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 89/120
	I0528 20:44:26.430050   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 90/120
	I0528 20:44:27.432009   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 91/120
	I0528 20:44:28.433482   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 92/120
	I0528 20:44:29.434824   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 93/120
	I0528 20:44:30.436169   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 94/120
	I0528 20:44:31.437447   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 95/120
	I0528 20:44:32.438828   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 96/120
	I0528 20:44:33.440145   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 97/120
	I0528 20:44:34.441336   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 98/120
	I0528 20:44:35.442591   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 99/120
	I0528 20:44:36.444662   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 100/120
	I0528 20:44:37.446345   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 101/120
	I0528 20:44:38.448167   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 102/120
	I0528 20:44:39.449574   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 103/120
	I0528 20:44:40.450862   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 104/120
	I0528 20:44:41.452469   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 105/120
	I0528 20:44:42.454065   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 106/120
	I0528 20:44:43.456319   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 107/120
	I0528 20:44:44.457637   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 108/120
	I0528 20:44:45.460080   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 109/120
	I0528 20:44:46.462178   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 110/120
	I0528 20:44:47.463786   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 111/120
	I0528 20:44:48.465021   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 112/120
	I0528 20:44:49.466356   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 113/120
	I0528 20:44:50.467797   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 114/120
	I0528 20:44:51.470011   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 115/120
	I0528 20:44:52.472304   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 116/120
	I0528 20:44:53.473687   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 117/120
	I0528 20:44:54.475046   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 118/120
	I0528 20:44:55.476531   26575 main.go:141] libmachine: (ha-908878-m02) Waiting for machine to stop 119/120
	I0528 20:44:56.477294   26575 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0528 20:44:56.477479   26575 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-908878 node stop m02 -v=7 --alsologtostderr": exit status 30
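The stderr above shows what exit status 30 means here: the stop path backs up /etc/cni and /etc/kubernetes, asks the kvm2 driver to stop the guest, then polls the machine state once per second for 120 attempts; the guest never leaves "Running", so the command gives up after roughly two minutes. Below is a minimal Go sketch of that polling pattern, an illustration only and not minikube's actual stop code; the `running` probe is a stand-in for the libmachine driver's state query.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// stopVM polls a state probe once per second, mirroring the
	// "Waiting for machine to stop i/120" lines in the log above.
	func stopVM(running func() bool, maxAttempts int) error {
		for i := 0; i < maxAttempts; i++ {
			if !running() {
				return nil // machine reached the Stopped state
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
			time.Sleep(time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		// A probe that never reports Stopped reproduces the failure mode seen above.
		fmt.Println("stop err:", stopVM(func() bool { return true }, 120))
	}
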
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-908878 status -v=7 --alsologtostderr: exit status 3 (19.033268099s)

                                                
                                                
-- stdout --
	ha-908878
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-908878-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-908878-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-908878-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0528 20:44:56.523403   27022 out.go:291] Setting OutFile to fd 1 ...
	I0528 20:44:56.523664   27022 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:44:56.523673   27022 out.go:304] Setting ErrFile to fd 2...
	I0528 20:44:56.523677   27022 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:44:56.523918   27022 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
	I0528 20:44:56.524106   27022 out.go:298] Setting JSON to false
	I0528 20:44:56.524134   27022 mustload.go:65] Loading cluster: ha-908878
	I0528 20:44:56.524171   27022 notify.go:220] Checking for updates...
	I0528 20:44:56.524529   27022 config.go:182] Loaded profile config "ha-908878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 20:44:56.524545   27022 status.go:255] checking status of ha-908878 ...
	I0528 20:44:56.525138   27022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:44:56.525202   27022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:44:56.540841   27022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45159
	I0528 20:44:56.541343   27022 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:44:56.541923   27022 main.go:141] libmachine: Using API Version  1
	I0528 20:44:56.541946   27022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:44:56.542374   27022 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:44:56.542575   27022 main.go:141] libmachine: (ha-908878) Calling .GetState
	I0528 20:44:56.544088   27022 status.go:330] ha-908878 host status = "Running" (err=<nil>)
	I0528 20:44:56.544110   27022 host.go:66] Checking if "ha-908878" exists ...
	I0528 20:44:56.544495   27022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:44:56.544538   27022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:44:56.559269   27022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45777
	I0528 20:44:56.559618   27022 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:44:56.560016   27022 main.go:141] libmachine: Using API Version  1
	I0528 20:44:56.560033   27022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:44:56.560365   27022 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:44:56.560534   27022 main.go:141] libmachine: (ha-908878) Calling .GetIP
	I0528 20:44:56.563257   27022 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:44:56.563698   27022 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:44:56.563720   27022 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:44:56.563871   27022 host.go:66] Checking if "ha-908878" exists ...
	I0528 20:44:56.564214   27022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:44:56.564265   27022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:44:56.578275   27022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42893
	I0528 20:44:56.578780   27022 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:44:56.579402   27022 main.go:141] libmachine: Using API Version  1
	I0528 20:44:56.579439   27022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:44:56.579749   27022 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:44:56.579918   27022 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:44:56.580153   27022 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 20:44:56.580188   27022 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:44:56.582849   27022 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:44:56.583377   27022 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:44:56.583406   27022 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:44:56.583561   27022 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:44:56.583758   27022 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:44:56.583960   27022 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:44:56.584122   27022 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/id_rsa Username:docker}
	I0528 20:44:56.673051   27022 ssh_runner.go:195] Run: systemctl --version
	I0528 20:44:56.680038   27022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 20:44:56.697596   27022 kubeconfig.go:125] found "ha-908878" server: "https://192.168.39.254:8443"
	I0528 20:44:56.697621   27022 api_server.go:166] Checking apiserver status ...
	I0528 20:44:56.697668   27022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 20:44:56.712940   27022 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1158/cgroup
	W0528 20:44:56.723729   27022 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1158/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0528 20:44:56.723782   27022 ssh_runner.go:195] Run: ls
	I0528 20:44:56.729675   27022 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0528 20:44:56.733877   27022 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0528 20:44:56.733894   27022 status.go:422] ha-908878 apiserver status = Running (err=<nil>)
	I0528 20:44:56.733904   27022 status.go:257] ha-908878 status: &{Name:ha-908878 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0528 20:44:56.733921   27022 status.go:255] checking status of ha-908878-m02 ...
	I0528 20:44:56.734233   27022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:44:56.734270   27022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:44:56.748616   27022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38413
	I0528 20:44:56.749013   27022 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:44:56.749451   27022 main.go:141] libmachine: Using API Version  1
	I0528 20:44:56.749471   27022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:44:56.749823   27022 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:44:56.749992   27022 main.go:141] libmachine: (ha-908878-m02) Calling .GetState
	I0528 20:44:56.751560   27022 status.go:330] ha-908878-m02 host status = "Running" (err=<nil>)
	I0528 20:44:56.751577   27022 host.go:66] Checking if "ha-908878-m02" exists ...
	I0528 20:44:56.751847   27022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:44:56.751876   27022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:44:56.765817   27022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39951
	I0528 20:44:56.766189   27022 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:44:56.766622   27022 main.go:141] libmachine: Using API Version  1
	I0528 20:44:56.766642   27022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:44:56.766938   27022 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:44:56.767135   27022 main.go:141] libmachine: (ha-908878-m02) Calling .GetIP
	I0528 20:44:56.769885   27022 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:44:56.770460   27022 main.go:141] libmachine: (ha-908878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:bd:28", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:39:35 +0000 UTC Type:0 Mac:52:54:00:b4:bd:28 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:ha-908878-m02 Clientid:01:52:54:00:b4:bd:28}
	I0528 20:44:56.770481   27022 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:44:56.770607   27022 host.go:66] Checking if "ha-908878-m02" exists ...
	I0528 20:44:56.770900   27022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:44:56.770932   27022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:44:56.785991   27022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44759
	I0528 20:44:56.786358   27022 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:44:56.786832   27022 main.go:141] libmachine: Using API Version  1
	I0528 20:44:56.786852   27022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:44:56.787166   27022 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:44:56.787347   27022 main.go:141] libmachine: (ha-908878-m02) Calling .DriverName
	I0528 20:44:56.787535   27022 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 20:44:56.787553   27022 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHHostname
	I0528 20:44:56.790804   27022 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:44:56.791368   27022 main.go:141] libmachine: (ha-908878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:bd:28", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:39:35 +0000 UTC Type:0 Mac:52:54:00:b4:bd:28 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:ha-908878-m02 Clientid:01:52:54:00:b4:bd:28}
	I0528 20:44:56.791396   27022 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:44:56.791533   27022 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHPort
	I0528 20:44:56.791660   27022 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHKeyPath
	I0528 20:44:56.791823   27022 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHUsername
	I0528 20:44:56.792089   27022 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m02/id_rsa Username:docker}
	W0528 20:45:15.130014   27022 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.239:22: connect: no route to host
	W0528 20:45:15.130113   27022 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.239:22: connect: no route to host
	E0528 20:45:15.130136   27022 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.239:22: connect: no route to host
	I0528 20:45:15.130150   27022 status.go:257] ha-908878-m02 status: &{Name:ha-908878-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0528 20:45:15.130174   27022 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.239:22: connect: no route to host
	I0528 20:45:15.130186   27022 status.go:255] checking status of ha-908878-m03 ...
	I0528 20:45:15.130511   27022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:15.130563   27022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:15.144995   27022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43671
	I0528 20:45:15.145413   27022 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:15.145866   27022 main.go:141] libmachine: Using API Version  1
	I0528 20:45:15.145890   27022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:15.146195   27022 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:15.146354   27022 main.go:141] libmachine: (ha-908878-m03) Calling .GetState
	I0528 20:45:15.148156   27022 status.go:330] ha-908878-m03 host status = "Running" (err=<nil>)
	I0528 20:45:15.148174   27022 host.go:66] Checking if "ha-908878-m03" exists ...
	I0528 20:45:15.148590   27022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:15.148644   27022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:15.162563   27022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42379
	I0528 20:45:15.162948   27022 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:15.163426   27022 main.go:141] libmachine: Using API Version  1
	I0528 20:45:15.163444   27022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:15.163720   27022 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:15.163881   27022 main.go:141] libmachine: (ha-908878-m03) Calling .GetIP
	I0528 20:45:15.166618   27022 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:45:15.167089   27022 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:45:15.167115   27022 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:45:15.167221   27022 host.go:66] Checking if "ha-908878-m03" exists ...
	I0528 20:45:15.167509   27022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:15.167542   27022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:15.182435   27022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39659
	I0528 20:45:15.182811   27022 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:15.183341   27022 main.go:141] libmachine: Using API Version  1
	I0528 20:45:15.183368   27022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:15.183681   27022 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:15.183895   27022 main.go:141] libmachine: (ha-908878-m03) Calling .DriverName
	I0528 20:45:15.184097   27022 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 20:45:15.184134   27022 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHHostname
	I0528 20:45:15.186880   27022 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:45:15.187302   27022 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:45:15.187326   27022 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:45:15.187449   27022 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHPort
	I0528 20:45:15.187599   27022 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHKeyPath
	I0528 20:45:15.187733   27022 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHUsername
	I0528 20:45:15.187862   27022 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m03/id_rsa Username:docker}
	I0528 20:45:15.283493   27022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 20:45:15.305019   27022 kubeconfig.go:125] found "ha-908878" server: "https://192.168.39.254:8443"
	I0528 20:45:15.305041   27022 api_server.go:166] Checking apiserver status ...
	I0528 20:45:15.305073   27022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 20:45:15.328125   27022 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1538/cgroup
	W0528 20:45:15.339217   27022 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1538/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0528 20:45:15.339310   27022 ssh_runner.go:195] Run: ls
	I0528 20:45:15.344214   27022 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0528 20:45:15.350359   27022 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0528 20:45:15.350379   27022 status.go:422] ha-908878-m03 apiserver status = Running (err=<nil>)
	I0528 20:45:15.350387   27022 status.go:257] ha-908878-m03 status: &{Name:ha-908878-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0528 20:45:15.350400   27022 status.go:255] checking status of ha-908878-m04 ...
	I0528 20:45:15.350665   27022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:15.350706   27022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:15.366092   27022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38171
	I0528 20:45:15.366456   27022 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:15.366859   27022 main.go:141] libmachine: Using API Version  1
	I0528 20:45:15.366880   27022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:15.367175   27022 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:15.367326   27022 main.go:141] libmachine: (ha-908878-m04) Calling .GetState
	I0528 20:45:15.368838   27022 status.go:330] ha-908878-m04 host status = "Running" (err=<nil>)
	I0528 20:45:15.368855   27022 host.go:66] Checking if "ha-908878-m04" exists ...
	I0528 20:45:15.369233   27022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:15.369275   27022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:15.383255   27022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43545
	I0528 20:45:15.383602   27022 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:15.384096   27022 main.go:141] libmachine: Using API Version  1
	I0528 20:45:15.384117   27022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:15.384376   27022 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:15.384553   27022 main.go:141] libmachine: (ha-908878-m04) Calling .GetIP
	I0528 20:45:15.387105   27022 main.go:141] libmachine: (ha-908878-m04) DBG | domain ha-908878-m04 has defined MAC address 52:54:00:dd:1f:24 in network mk-ha-908878
	I0528 20:45:15.387516   27022 main.go:141] libmachine: (ha-908878-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:1f:24", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:42:12 +0000 UTC Type:0 Mac:52:54:00:dd:1f:24 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-908878-m04 Clientid:01:52:54:00:dd:1f:24}
	I0528 20:45:15.387548   27022 main.go:141] libmachine: (ha-908878-m04) DBG | domain ha-908878-m04 has defined IP address 192.168.39.38 and MAC address 52:54:00:dd:1f:24 in network mk-ha-908878
	I0528 20:45:15.387689   27022 host.go:66] Checking if "ha-908878-m04" exists ...
	I0528 20:45:15.387960   27022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:15.387995   27022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:15.402195   27022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37067
	I0528 20:45:15.402597   27022 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:15.403062   27022 main.go:141] libmachine: Using API Version  1
	I0528 20:45:15.403081   27022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:15.403340   27022 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:15.403526   27022 main.go:141] libmachine: (ha-908878-m04) Calling .DriverName
	I0528 20:45:15.403701   27022 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 20:45:15.403720   27022 main.go:141] libmachine: (ha-908878-m04) Calling .GetSSHHostname
	I0528 20:45:15.406164   27022 main.go:141] libmachine: (ha-908878-m04) DBG | domain ha-908878-m04 has defined MAC address 52:54:00:dd:1f:24 in network mk-ha-908878
	I0528 20:45:15.406658   27022 main.go:141] libmachine: (ha-908878-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:1f:24", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:42:12 +0000 UTC Type:0 Mac:52:54:00:dd:1f:24 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-908878-m04 Clientid:01:52:54:00:dd:1f:24}
	I0528 20:45:15.406687   27022 main.go:141] libmachine: (ha-908878-m04) DBG | domain ha-908878-m04 has defined IP address 192.168.39.38 and MAC address 52:54:00:dd:1f:24 in network mk-ha-908878
	I0528 20:45:15.406848   27022 main.go:141] libmachine: (ha-908878-m04) Calling .GetSSHPort
	I0528 20:45:15.406999   27022 main.go:141] libmachine: (ha-908878-m04) Calling .GetSSHKeyPath
	I0528 20:45:15.407169   27022 main.go:141] libmachine: (ha-908878-m04) Calling .GetSSHUsername
	I0528 20:45:15.407329   27022 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m04/id_rsa Username:docker}
	I0528 20:45:15.494166   27022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 20:45:15.510374   27022 status.go:257] ha-908878-m04 status: &{Name:ha-908878-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-908878 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-908878 -n ha-908878
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-908878 logs -n 25: (1.394356673s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-908878 cp ha-908878-m03:/home/docker/cp-test.txt                              | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3657915045/001/cp-test_ha-908878-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-908878 ssh -n                                                                 | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-908878 cp ha-908878-m03:/home/docker/cp-test.txt                              | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878:/home/docker/cp-test_ha-908878-m03_ha-908878.txt                       |           |         |         |                     |                     |
	| ssh     | ha-908878 ssh -n                                                                 | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-908878 ssh -n ha-908878 sudo cat                                              | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | /home/docker/cp-test_ha-908878-m03_ha-908878.txt                                 |           |         |         |                     |                     |
	| cp      | ha-908878 cp ha-908878-m03:/home/docker/cp-test.txt                              | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878-m02:/home/docker/cp-test_ha-908878-m03_ha-908878-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-908878 ssh -n                                                                 | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-908878 ssh -n ha-908878-m02 sudo cat                                          | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | /home/docker/cp-test_ha-908878-m03_ha-908878-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-908878 cp ha-908878-m03:/home/docker/cp-test.txt                              | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878-m04:/home/docker/cp-test_ha-908878-m03_ha-908878-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-908878 ssh -n                                                                 | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-908878 ssh -n ha-908878-m04 sudo cat                                          | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | /home/docker/cp-test_ha-908878-m03_ha-908878-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-908878 cp testdata/cp-test.txt                                                | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-908878 ssh -n                                                                 | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-908878 cp ha-908878-m04:/home/docker/cp-test.txt                              | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3657915045/001/cp-test_ha-908878-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-908878 ssh -n                                                                 | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-908878 cp ha-908878-m04:/home/docker/cp-test.txt                              | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878:/home/docker/cp-test_ha-908878-m04_ha-908878.txt                       |           |         |         |                     |                     |
	| ssh     | ha-908878 ssh -n                                                                 | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-908878 ssh -n ha-908878 sudo cat                                              | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | /home/docker/cp-test_ha-908878-m04_ha-908878.txt                                 |           |         |         |                     |                     |
	| cp      | ha-908878 cp ha-908878-m04:/home/docker/cp-test.txt                              | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878-m02:/home/docker/cp-test_ha-908878-m04_ha-908878-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-908878 ssh -n                                                                 | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-908878 ssh -n ha-908878-m02 sudo cat                                          | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | /home/docker/cp-test_ha-908878-m04_ha-908878-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-908878 cp ha-908878-m04:/home/docker/cp-test.txt                              | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878-m03:/home/docker/cp-test_ha-908878-m04_ha-908878-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-908878 ssh -n                                                                 | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-908878 ssh -n ha-908878-m03 sudo cat                                          | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | /home/docker/cp-test_ha-908878-m04_ha-908878-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-908878 node stop m02 -v=7                                                     | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/28 20:38:28
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0528 20:38:28.508057   22579 out.go:291] Setting OutFile to fd 1 ...
	I0528 20:38:28.508200   22579 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:38:28.508213   22579 out.go:304] Setting ErrFile to fd 2...
	I0528 20:38:28.508220   22579 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:38:28.508582   22579 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
	I0528 20:38:28.509131   22579 out.go:298] Setting JSON to false
	I0528 20:38:28.510023   22579 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1251,"bootTime":1716927457,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0528 20:38:28.510074   22579 start.go:139] virtualization: kvm guest
	I0528 20:38:28.512253   22579 out.go:177] * [ha-908878] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0528 20:38:28.513529   22579 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 20:38:28.514717   22579 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 20:38:28.513504   22579 notify.go:220] Checking for updates...
	I0528 20:38:28.517192   22579 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 20:38:28.518516   22579 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 20:38:28.519639   22579 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0528 20:38:28.520794   22579 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 20:38:28.521958   22579 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 20:38:28.555938   22579 out.go:177] * Using the kvm2 driver based on user configuration
	I0528 20:38:28.557171   22579 start.go:297] selected driver: kvm2
	I0528 20:38:28.557193   22579 start.go:901] validating driver "kvm2" against <nil>
	I0528 20:38:28.557210   22579 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0528 20:38:28.557907   22579 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 20:38:28.558002   22579 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18966-3963/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0528 20:38:28.573789   22579 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0528 20:38:28.573849   22579 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0528 20:38:28.574069   22579 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 20:38:28.574138   22579 cni.go:84] Creating CNI manager for ""
	I0528 20:38:28.574154   22579 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0528 20:38:28.574161   22579 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0528 20:38:28.574233   22579 start.go:340] cluster config:
	{Name:ha-908878 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-908878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 20:38:28.574344   22579 iso.go:125] acquiring lock: {Name:mk0ef48bd19f4019c3ee87391d15b9afc1d5e8d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 20:38:28.576870   22579 out.go:177] * Starting "ha-908878" primary control-plane node in "ha-908878" cluster
	I0528 20:38:28.578026   22579 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 20:38:28.578060   22579 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0528 20:38:28.578071   22579 cache.go:56] Caching tarball of preloaded images
	I0528 20:38:28.578129   22579 preload.go:173] Found /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0528 20:38:28.578140   22579 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0528 20:38:28.578409   22579 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/config.json ...
	I0528 20:38:28.578427   22579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/config.json: {Name:mk828cc9c3416b68ca79835683bb9902a90d34c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:38:28.578562   22579 start.go:360] acquireMachinesLock for ha-908878: {Name:mk6fb3103b370c7f3d9923fd9de7afe2c77239d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0528 20:38:28.578588   22579 start.go:364] duration metric: took 14.265µs to acquireMachinesLock for "ha-908878"
	I0528 20:38:28.578604   22579 start.go:93] Provisioning new machine with config: &{Name:ha-908878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-908878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0528 20:38:28.578659   22579 start.go:125] createHost starting for "" (driver="kvm2")
	I0528 20:38:28.580191   22579 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0528 20:38:28.580315   22579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:38:28.580355   22579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:38:28.594111   22579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35167
	I0528 20:38:28.594491   22579 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:38:28.595027   22579 main.go:141] libmachine: Using API Version  1
	I0528 20:38:28.595052   22579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:38:28.595338   22579 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:38:28.595499   22579 main.go:141] libmachine: (ha-908878) Calling .GetMachineName
	I0528 20:38:28.595664   22579 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:38:28.595774   22579 start.go:159] libmachine.API.Create for "ha-908878" (driver="kvm2")
	I0528 20:38:28.595809   22579 client.go:168] LocalClient.Create starting
	I0528 20:38:28.595848   22579 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem
	I0528 20:38:28.595882   22579 main.go:141] libmachine: Decoding PEM data...
	I0528 20:38:28.595899   22579 main.go:141] libmachine: Parsing certificate...
	I0528 20:38:28.595957   22579 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem
	I0528 20:38:28.595982   22579 main.go:141] libmachine: Decoding PEM data...
	I0528 20:38:28.595996   22579 main.go:141] libmachine: Parsing certificate...
	I0528 20:38:28.596012   22579 main.go:141] libmachine: Running pre-create checks...
	I0528 20:38:28.596021   22579 main.go:141] libmachine: (ha-908878) Calling .PreCreateCheck
	I0528 20:38:28.596395   22579 main.go:141] libmachine: (ha-908878) Calling .GetConfigRaw
	I0528 20:38:28.596722   22579 main.go:141] libmachine: Creating machine...
	I0528 20:38:28.596740   22579 main.go:141] libmachine: (ha-908878) Calling .Create
	I0528 20:38:28.596844   22579 main.go:141] libmachine: (ha-908878) Creating KVM machine...
	I0528 20:38:28.597973   22579 main.go:141] libmachine: (ha-908878) DBG | found existing default KVM network
	I0528 20:38:28.598602   22579 main.go:141] libmachine: (ha-908878) DBG | I0528 20:38:28.598479   22602 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0528 20:38:28.598613   22579 main.go:141] libmachine: (ha-908878) DBG | created network xml: 
	I0528 20:38:28.598627   22579 main.go:141] libmachine: (ha-908878) DBG | <network>
	I0528 20:38:28.598634   22579 main.go:141] libmachine: (ha-908878) DBG |   <name>mk-ha-908878</name>
	I0528 20:38:28.598640   22579 main.go:141] libmachine: (ha-908878) DBG |   <dns enable='no'/>
	I0528 20:38:28.598646   22579 main.go:141] libmachine: (ha-908878) DBG |   
	I0528 20:38:28.598655   22579 main.go:141] libmachine: (ha-908878) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0528 20:38:28.598662   22579 main.go:141] libmachine: (ha-908878) DBG |     <dhcp>
	I0528 20:38:28.598683   22579 main.go:141] libmachine: (ha-908878) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0528 20:38:28.598691   22579 main.go:141] libmachine: (ha-908878) DBG |     </dhcp>
	I0528 20:38:28.598718   22579 main.go:141] libmachine: (ha-908878) DBG |   </ip>
	I0528 20:38:28.598735   22579 main.go:141] libmachine: (ha-908878) DBG |   
	I0528 20:38:28.598744   22579 main.go:141] libmachine: (ha-908878) DBG | </network>
	I0528 20:38:28.598751   22579 main.go:141] libmachine: (ha-908878) DBG | 
	I0528 20:38:28.603635   22579 main.go:141] libmachine: (ha-908878) DBG | trying to create private KVM network mk-ha-908878 192.168.39.0/24...
	I0528 20:38:28.665930   22579 main.go:141] libmachine: (ha-908878) DBG | private KVM network mk-ha-908878 192.168.39.0/24 created
	I0528 20:38:28.665964   22579 main.go:141] libmachine: (ha-908878) Setting up store path in /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878 ...
	I0528 20:38:28.665977   22579 main.go:141] libmachine: (ha-908878) DBG | I0528 20:38:28.665899   22602 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 20:38:28.665995   22579 main.go:141] libmachine: (ha-908878) Building disk image from file:///home/jenkins/minikube-integration/18966-3963/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso
	I0528 20:38:28.666062   22579 main.go:141] libmachine: (ha-908878) Downloading /home/jenkins/minikube-integration/18966-3963/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18966-3963/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0528 20:38:28.894340   22579 main.go:141] libmachine: (ha-908878) DBG | I0528 20:38:28.894229   22602 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/id_rsa...
	I0528 20:38:28.954571   22579 main.go:141] libmachine: (ha-908878) DBG | I0528 20:38:28.954484   22602 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/ha-908878.rawdisk...
	I0528 20:38:28.954612   22579 main.go:141] libmachine: (ha-908878) DBG | Writing magic tar header
	I0528 20:38:28.954624   22579 main.go:141] libmachine: (ha-908878) DBG | Writing SSH key tar header
	I0528 20:38:28.954648   22579 main.go:141] libmachine: (ha-908878) DBG | I0528 20:38:28.954607   22602 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878 ...
	I0528 20:38:28.954758   22579 main.go:141] libmachine: (ha-908878) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878
	I0528 20:38:28.954782   22579 main.go:141] libmachine: (ha-908878) Setting executable bit set on /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878 (perms=drwx------)
	I0528 20:38:28.954790   22579 main.go:141] libmachine: (ha-908878) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18966-3963/.minikube/machines
	I0528 20:38:28.954805   22579 main.go:141] libmachine: (ha-908878) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 20:38:28.954816   22579 main.go:141] libmachine: (ha-908878) Setting executable bit set on /home/jenkins/minikube-integration/18966-3963/.minikube/machines (perms=drwxr-xr-x)
	I0528 20:38:28.954829   22579 main.go:141] libmachine: (ha-908878) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18966-3963
	I0528 20:38:28.954841   22579 main.go:141] libmachine: (ha-908878) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0528 20:38:28.954849   22579 main.go:141] libmachine: (ha-908878) DBG | Checking permissions on dir: /home/jenkins
	I0528 20:38:28.954855   22579 main.go:141] libmachine: (ha-908878) DBG | Checking permissions on dir: /home
	I0528 20:38:28.954860   22579 main.go:141] libmachine: (ha-908878) DBG | Skipping /home - not owner
	I0528 20:38:28.954872   22579 main.go:141] libmachine: (ha-908878) Setting executable bit set on /home/jenkins/minikube-integration/18966-3963/.minikube (perms=drwxr-xr-x)
	I0528 20:38:28.954885   22579 main.go:141] libmachine: (ha-908878) Setting executable bit set on /home/jenkins/minikube-integration/18966-3963 (perms=drwxrwxr-x)
	I0528 20:38:28.954902   22579 main.go:141] libmachine: (ha-908878) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0528 20:38:28.954915   22579 main.go:141] libmachine: (ha-908878) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0528 20:38:28.954922   22579 main.go:141] libmachine: (ha-908878) Creating domain...
	I0528 20:38:28.956073   22579 main.go:141] libmachine: (ha-908878) define libvirt domain using xml: 
	I0528 20:38:28.956095   22579 main.go:141] libmachine: (ha-908878) <domain type='kvm'>
	I0528 20:38:28.956100   22579 main.go:141] libmachine: (ha-908878)   <name>ha-908878</name>
	I0528 20:38:28.956105   22579 main.go:141] libmachine: (ha-908878)   <memory unit='MiB'>2200</memory>
	I0528 20:38:28.956110   22579 main.go:141] libmachine: (ha-908878)   <vcpu>2</vcpu>
	I0528 20:38:28.956117   22579 main.go:141] libmachine: (ha-908878)   <features>
	I0528 20:38:28.956122   22579 main.go:141] libmachine: (ha-908878)     <acpi/>
	I0528 20:38:28.956126   22579 main.go:141] libmachine: (ha-908878)     <apic/>
	I0528 20:38:28.956131   22579 main.go:141] libmachine: (ha-908878)     <pae/>
	I0528 20:38:28.956148   22579 main.go:141] libmachine: (ha-908878)     
	I0528 20:38:28.956161   22579 main.go:141] libmachine: (ha-908878)   </features>
	I0528 20:38:28.956168   22579 main.go:141] libmachine: (ha-908878)   <cpu mode='host-passthrough'>
	I0528 20:38:28.956178   22579 main.go:141] libmachine: (ha-908878)   
	I0528 20:38:28.956185   22579 main.go:141] libmachine: (ha-908878)   </cpu>
	I0528 20:38:28.956192   22579 main.go:141] libmachine: (ha-908878)   <os>
	I0528 20:38:28.956202   22579 main.go:141] libmachine: (ha-908878)     <type>hvm</type>
	I0528 20:38:28.956208   22579 main.go:141] libmachine: (ha-908878)     <boot dev='cdrom'/>
	I0528 20:38:28.956212   22579 main.go:141] libmachine: (ha-908878)     <boot dev='hd'/>
	I0528 20:38:28.956218   22579 main.go:141] libmachine: (ha-908878)     <bootmenu enable='no'/>
	I0528 20:38:28.956228   22579 main.go:141] libmachine: (ha-908878)   </os>
	I0528 20:38:28.956240   22579 main.go:141] libmachine: (ha-908878)   <devices>
	I0528 20:38:28.956257   22579 main.go:141] libmachine: (ha-908878)     <disk type='file' device='cdrom'>
	I0528 20:38:28.956272   22579 main.go:141] libmachine: (ha-908878)       <source file='/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/boot2docker.iso'/>
	I0528 20:38:28.956283   22579 main.go:141] libmachine: (ha-908878)       <target dev='hdc' bus='scsi'/>
	I0528 20:38:28.956294   22579 main.go:141] libmachine: (ha-908878)       <readonly/>
	I0528 20:38:28.956301   22579 main.go:141] libmachine: (ha-908878)     </disk>
	I0528 20:38:28.956329   22579 main.go:141] libmachine: (ha-908878)     <disk type='file' device='disk'>
	I0528 20:38:28.956355   22579 main.go:141] libmachine: (ha-908878)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0528 20:38:28.956372   22579 main.go:141] libmachine: (ha-908878)       <source file='/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/ha-908878.rawdisk'/>
	I0528 20:38:28.956383   22579 main.go:141] libmachine: (ha-908878)       <target dev='hda' bus='virtio'/>
	I0528 20:38:28.956396   22579 main.go:141] libmachine: (ha-908878)     </disk>
	I0528 20:38:28.956407   22579 main.go:141] libmachine: (ha-908878)     <interface type='network'>
	I0528 20:38:28.956427   22579 main.go:141] libmachine: (ha-908878)       <source network='mk-ha-908878'/>
	I0528 20:38:28.956443   22579 main.go:141] libmachine: (ha-908878)       <model type='virtio'/>
	I0528 20:38:28.956459   22579 main.go:141] libmachine: (ha-908878)     </interface>
	I0528 20:38:28.956475   22579 main.go:141] libmachine: (ha-908878)     <interface type='network'>
	I0528 20:38:28.956488   22579 main.go:141] libmachine: (ha-908878)       <source network='default'/>
	I0528 20:38:28.956499   22579 main.go:141] libmachine: (ha-908878)       <model type='virtio'/>
	I0528 20:38:28.956509   22579 main.go:141] libmachine: (ha-908878)     </interface>
	I0528 20:38:28.956516   22579 main.go:141] libmachine: (ha-908878)     <serial type='pty'>
	I0528 20:38:28.956527   22579 main.go:141] libmachine: (ha-908878)       <target port='0'/>
	I0528 20:38:28.956536   22579 main.go:141] libmachine: (ha-908878)     </serial>
	I0528 20:38:28.956555   22579 main.go:141] libmachine: (ha-908878)     <console type='pty'>
	I0528 20:38:28.956567   22579 main.go:141] libmachine: (ha-908878)       <target type='serial' port='0'/>
	I0528 20:38:28.956602   22579 main.go:141] libmachine: (ha-908878)     </console>
	I0528 20:38:28.956627   22579 main.go:141] libmachine: (ha-908878)     <rng model='virtio'>
	I0528 20:38:28.956637   22579 main.go:141] libmachine: (ha-908878)       <backend model='random'>/dev/random</backend>
	I0528 20:38:28.956695   22579 main.go:141] libmachine: (ha-908878)     </rng>
	I0528 20:38:28.956707   22579 main.go:141] libmachine: (ha-908878)     
	I0528 20:38:28.956714   22579 main.go:141] libmachine: (ha-908878)     
	I0528 20:38:28.956722   22579 main.go:141] libmachine: (ha-908878)   </devices>
	I0528 20:38:28.956733   22579 main.go:141] libmachine: (ha-908878) </domain>
	I0528 20:38:28.956742   22579 main.go:141] libmachine: (ha-908878) 
	I0528 20:38:28.960610   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:ea:b9:f9 in network default
	I0528 20:38:28.961119   22579 main.go:141] libmachine: (ha-908878) Ensuring networks are active...
	I0528 20:38:28.961134   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:28.961802   22579 main.go:141] libmachine: (ha-908878) Ensuring network default is active
	I0528 20:38:28.962108   22579 main.go:141] libmachine: (ha-908878) Ensuring network mk-ha-908878 is active
	I0528 20:38:28.962636   22579 main.go:141] libmachine: (ha-908878) Getting domain xml...
	I0528 20:38:28.963400   22579 main.go:141] libmachine: (ha-908878) Creating domain...
	I0528 20:38:30.122597   22579 main.go:141] libmachine: (ha-908878) Waiting to get IP...
	I0528 20:38:30.123378   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:30.123741   22579 main.go:141] libmachine: (ha-908878) DBG | unable to find current IP address of domain ha-908878 in network mk-ha-908878
	I0528 20:38:30.123764   22579 main.go:141] libmachine: (ha-908878) DBG | I0528 20:38:30.123716   22602 retry.go:31] will retry after 239.467208ms: waiting for machine to come up
	I0528 20:38:30.365210   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:30.365776   22579 main.go:141] libmachine: (ha-908878) DBG | unable to find current IP address of domain ha-908878 in network mk-ha-908878
	I0528 20:38:30.365806   22579 main.go:141] libmachine: (ha-908878) DBG | I0528 20:38:30.365717   22602 retry.go:31] will retry after 260.357194ms: waiting for machine to come up
	I0528 20:38:30.627156   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:30.627558   22579 main.go:141] libmachine: (ha-908878) DBG | unable to find current IP address of domain ha-908878 in network mk-ha-908878
	I0528 20:38:30.627587   22579 main.go:141] libmachine: (ha-908878) DBG | I0528 20:38:30.627511   22602 retry.go:31] will retry after 315.484937ms: waiting for machine to come up
	I0528 20:38:30.944936   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:30.945401   22579 main.go:141] libmachine: (ha-908878) DBG | unable to find current IP address of domain ha-908878 in network mk-ha-908878
	I0528 20:38:30.945419   22579 main.go:141] libmachine: (ha-908878) DBG | I0528 20:38:30.945362   22602 retry.go:31] will retry after 403.722417ms: waiting for machine to come up
	I0528 20:38:31.351165   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:31.351582   22579 main.go:141] libmachine: (ha-908878) DBG | unable to find current IP address of domain ha-908878 in network mk-ha-908878
	I0528 20:38:31.351618   22579 main.go:141] libmachine: (ha-908878) DBG | I0528 20:38:31.351558   22602 retry.go:31] will retry after 705.789161ms: waiting for machine to come up
	I0528 20:38:32.058483   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:32.058911   22579 main.go:141] libmachine: (ha-908878) DBG | unable to find current IP address of domain ha-908878 in network mk-ha-908878
	I0528 20:38:32.058938   22579 main.go:141] libmachine: (ha-908878) DBG | I0528 20:38:32.058845   22602 retry.go:31] will retry after 853.06609ms: waiting for machine to come up
	I0528 20:38:32.913390   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:32.913788   22579 main.go:141] libmachine: (ha-908878) DBG | unable to find current IP address of domain ha-908878 in network mk-ha-908878
	I0528 20:38:32.913830   22579 main.go:141] libmachine: (ha-908878) DBG | I0528 20:38:32.913698   22602 retry.go:31] will retry after 930.199316ms: waiting for machine to come up
	I0528 20:38:33.845161   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:33.845714   22579 main.go:141] libmachine: (ha-908878) DBG | unable to find current IP address of domain ha-908878 in network mk-ha-908878
	I0528 20:38:33.845753   22579 main.go:141] libmachine: (ha-908878) DBG | I0528 20:38:33.845660   22602 retry.go:31] will retry after 1.45078343s: waiting for machine to come up
	I0528 20:38:35.298107   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:35.298584   22579 main.go:141] libmachine: (ha-908878) DBG | unable to find current IP address of domain ha-908878 in network mk-ha-908878
	I0528 20:38:35.298611   22579 main.go:141] libmachine: (ha-908878) DBG | I0528 20:38:35.298533   22602 retry.go:31] will retry after 1.507467761s: waiting for machine to come up
	I0528 20:38:36.808111   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:36.808497   22579 main.go:141] libmachine: (ha-908878) DBG | unable to find current IP address of domain ha-908878 in network mk-ha-908878
	I0528 20:38:36.808519   22579 main.go:141] libmachine: (ha-908878) DBG | I0528 20:38:36.808461   22602 retry.go:31] will retry after 1.96576782s: waiting for machine to come up
	I0528 20:38:38.775422   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:38.775838   22579 main.go:141] libmachine: (ha-908878) DBG | unable to find current IP address of domain ha-908878 in network mk-ha-908878
	I0528 20:38:38.775867   22579 main.go:141] libmachine: (ha-908878) DBG | I0528 20:38:38.775781   22602 retry.go:31] will retry after 1.786105039s: waiting for machine to come up
	I0528 20:38:40.564563   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:40.564971   22579 main.go:141] libmachine: (ha-908878) DBG | unable to find current IP address of domain ha-908878 in network mk-ha-908878
	I0528 20:38:40.565005   22579 main.go:141] libmachine: (ha-908878) DBG | I0528 20:38:40.564941   22602 retry.go:31] will retry after 3.177899355s: waiting for machine to come up
	I0528 20:38:43.744675   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:43.745084   22579 main.go:141] libmachine: (ha-908878) DBG | unable to find current IP address of domain ha-908878 in network mk-ha-908878
	I0528 20:38:43.745107   22579 main.go:141] libmachine: (ha-908878) DBG | I0528 20:38:43.745033   22602 retry.go:31] will retry after 4.318254436s: waiting for machine to come up
	I0528 20:38:48.064298   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:48.064765   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has current primary IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:48.064795   22579 main.go:141] libmachine: (ha-908878) Found IP for machine: 192.168.39.100
	I0528 20:38:48.064809   22579 main.go:141] libmachine: (ha-908878) Reserving static IP address...
	I0528 20:38:48.065123   22579 main.go:141] libmachine: (ha-908878) DBG | unable to find host DHCP lease matching {name: "ha-908878", mac: "52:54:00:bc:73:cb", ip: "192.168.39.100"} in network mk-ha-908878
	I0528 20:38:48.136166   22579 main.go:141] libmachine: (ha-908878) DBG | Getting to WaitForSSH function...
	I0528 20:38:48.136194   22579 main.go:141] libmachine: (ha-908878) Reserved static IP address: 192.168.39.100
	I0528 20:38:48.136255   22579 main.go:141] libmachine: (ha-908878) Waiting for SSH to be available...
	I0528 20:38:48.138625   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:48.139099   22579 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:minikube Clientid:01:52:54:00:bc:73:cb}
	I0528 20:38:48.139124   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:48.139358   22579 main.go:141] libmachine: (ha-908878) DBG | Using SSH client type: external
	I0528 20:38:48.139388   22579 main.go:141] libmachine: (ha-908878) DBG | Using SSH private key: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/id_rsa (-rw-------)
	I0528 20:38:48.139441   22579 main.go:141] libmachine: (ha-908878) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.100 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0528 20:38:48.139460   22579 main.go:141] libmachine: (ha-908878) DBG | About to run SSH command:
	I0528 20:38:48.139480   22579 main.go:141] libmachine: (ha-908878) DBG | exit 0
	I0528 20:38:48.265512   22579 main.go:141] libmachine: (ha-908878) DBG | SSH cmd err, output: <nil>: 
	I0528 20:38:48.265775   22579 main.go:141] libmachine: (ha-908878) KVM machine creation complete!
	I0528 20:38:48.266075   22579 main.go:141] libmachine: (ha-908878) Calling .GetConfigRaw
	I0528 20:38:48.266535   22579 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:38:48.266734   22579 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:38:48.266881   22579 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0528 20:38:48.266894   22579 main.go:141] libmachine: (ha-908878) Calling .GetState
	I0528 20:38:48.268080   22579 main.go:141] libmachine: Detecting operating system of created instance...
	I0528 20:38:48.268092   22579 main.go:141] libmachine: Waiting for SSH to be available...
	I0528 20:38:48.268102   22579 main.go:141] libmachine: Getting to WaitForSSH function...
	I0528 20:38:48.268108   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:38:48.270260   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:48.270559   22579 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:38:48.270598   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:48.270668   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:38:48.270813   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:38:48.270951   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:38:48.271067   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:38:48.271194   22579 main.go:141] libmachine: Using SSH client type: native
	I0528 20:38:48.271358   22579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0528 20:38:48.271369   22579 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0528 20:38:48.376611   22579 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 20:38:48.376634   22579 main.go:141] libmachine: Detecting the provisioner...
	I0528 20:38:48.376643   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:38:48.379304   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:48.379651   22579 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:38:48.379684   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:48.379771   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:38:48.379955   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:38:48.380110   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:38:48.380271   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:38:48.380435   22579 main.go:141] libmachine: Using SSH client type: native
	I0528 20:38:48.380644   22579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0528 20:38:48.380661   22579 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0528 20:38:48.489958   22579 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0528 20:38:48.490049   22579 main.go:141] libmachine: found compatible host: buildroot
	I0528 20:38:48.490065   22579 main.go:141] libmachine: Provisioning with buildroot...
	I0528 20:38:48.490077   22579 main.go:141] libmachine: (ha-908878) Calling .GetMachineName
	I0528 20:38:48.490291   22579 buildroot.go:166] provisioning hostname "ha-908878"
	I0528 20:38:48.490314   22579 main.go:141] libmachine: (ha-908878) Calling .GetMachineName
	I0528 20:38:48.490462   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:38:48.492870   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:48.493158   22579 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:38:48.493196   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:48.493290   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:38:48.493469   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:38:48.493622   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:38:48.493772   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:38:48.493895   22579 main.go:141] libmachine: Using SSH client type: native
	I0528 20:38:48.494099   22579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0528 20:38:48.494115   22579 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-908878 && echo "ha-908878" | sudo tee /etc/hostname
	I0528 20:38:48.615192   22579 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-908878
	
	I0528 20:38:48.615213   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:38:48.617637   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:48.617972   22579 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:38:48.617998   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:48.618145   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:38:48.618340   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:38:48.618503   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:38:48.618640   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:38:48.618779   22579 main.go:141] libmachine: Using SSH client type: native
	I0528 20:38:48.618918   22579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0528 20:38:48.618933   22579 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-908878' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-908878/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-908878' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 20:38:48.733892   22579 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 20:38:48.733916   22579 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18966-3963/.minikube CaCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18966-3963/.minikube}
	I0528 20:38:48.733945   22579 buildroot.go:174] setting up certificates
	I0528 20:38:48.733958   22579 provision.go:84] configureAuth start
	I0528 20:38:48.733974   22579 main.go:141] libmachine: (ha-908878) Calling .GetMachineName
	I0528 20:38:48.734211   22579 main.go:141] libmachine: (ha-908878) Calling .GetIP
	I0528 20:38:48.736486   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:48.736765   22579 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:38:48.736787   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:48.736920   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:38:48.738949   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:48.739282   22579 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:38:48.739306   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:48.739421   22579 provision.go:143] copyHostCerts
	I0528 20:38:48.739452   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem
	I0528 20:38:48.739482   22579 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem, removing ...
	I0528 20:38:48.739494   22579 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem
	I0528 20:38:48.739554   22579 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem (1078 bytes)
	I0528 20:38:48.739634   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem
	I0528 20:38:48.739651   22579 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem, removing ...
	I0528 20:38:48.739657   22579 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem
	I0528 20:38:48.739681   22579 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem (1123 bytes)
	I0528 20:38:48.739732   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem
	I0528 20:38:48.739753   22579 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem, removing ...
	I0528 20:38:48.739760   22579 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem
	I0528 20:38:48.739780   22579 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem (1675 bytes)
	I0528 20:38:48.739835   22579 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem org=jenkins.ha-908878 san=[127.0.0.1 192.168.39.100 ha-908878 localhost minikube]
	I0528 20:38:48.984696   22579 provision.go:177] copyRemoteCerts
	I0528 20:38:48.984750   22579 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 20:38:48.984771   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:38:48.987414   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:48.987713   22579 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:38:48.987737   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:48.987932   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:38:48.988125   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:38:48.988391   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:38:48.988533   22579 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/id_rsa Username:docker}
	I0528 20:38:49.075941   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0528 20:38:49.075995   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0528 20:38:49.099179   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0528 20:38:49.099223   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0528 20:38:49.121756   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0528 20:38:49.121819   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0528 20:38:49.144028   22579 provision.go:87] duration metric: took 410.05864ms to configureAuth
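
configureAuth above generates a server certificate for the machine's IP and names and scp's ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A quick illustrative check that the provisioned material is consistent, using the SSH key path and IP shown in the log:

    # Confirm server.pem chains to the provisioned CA (expected output: "server.pem: OK").
    ssh -i /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/id_rsa \
        docker@192.168.39.100 \
        'sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem'
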
	I0528 20:38:49.144046   22579 buildroot.go:189] setting minikube options for container-runtime
	I0528 20:38:49.144200   22579 config.go:182] Loaded profile config "ha-908878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 20:38:49.144289   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:38:49.146775   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:49.147067   22579 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:38:49.147090   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:49.147223   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:38:49.147410   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:38:49.147585   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:38:49.147711   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:38:49.147880   22579 main.go:141] libmachine: Using SSH client type: native
	I0528 20:38:49.148087   22579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0528 20:38:49.148114   22579 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0528 20:38:49.420792   22579 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0528 20:38:49.420821   22579 main.go:141] libmachine: Checking connection to Docker...
	I0528 20:38:49.420831   22579 main.go:141] libmachine: (ha-908878) Calling .GetURL
	I0528 20:38:49.422176   22579 main.go:141] libmachine: (ha-908878) DBG | Using libvirt version 6000000
	I0528 20:38:49.424073   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:49.424362   22579 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:38:49.424394   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:49.424516   22579 main.go:141] libmachine: Docker is up and running!
	I0528 20:38:49.424531   22579 main.go:141] libmachine: Reticulating splines...
	I0528 20:38:49.424539   22579 client.go:171] duration metric: took 20.828718668s to LocalClient.Create
	I0528 20:38:49.424566   22579 start.go:167] duration metric: took 20.828790777s to libmachine.API.Create "ha-908878"
	I0528 20:38:49.424578   22579 start.go:293] postStartSetup for "ha-908878" (driver="kvm2")
	I0528 20:38:49.424592   22579 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0528 20:38:49.424614   22579 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:38:49.424841   22579 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0528 20:38:49.424861   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:38:49.426765   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:49.427217   22579 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:38:49.427240   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:49.427340   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:38:49.427485   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:38:49.427633   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:38:49.427818   22579 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/id_rsa Username:docker}
	I0528 20:38:49.511709   22579 ssh_runner.go:195] Run: cat /etc/os-release
	I0528 20:38:49.515889   22579 info.go:137] Remote host: Buildroot 2023.02.9
	I0528 20:38:49.515913   22579 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/addons for local assets ...
	I0528 20:38:49.515977   22579 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/files for local assets ...
	I0528 20:38:49.516088   22579 filesync.go:149] local asset: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem -> 117602.pem in /etc/ssl/certs
	I0528 20:38:49.516100   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem -> /etc/ssl/certs/117602.pem
	I0528 20:38:49.516215   22579 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0528 20:38:49.525404   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem --> /etc/ssl/certs/117602.pem (1708 bytes)
	I0528 20:38:49.547425   22579 start.go:296] duration metric: took 122.835572ms for postStartSetup
	I0528 20:38:49.547461   22579 main.go:141] libmachine: (ha-908878) Calling .GetConfigRaw
	I0528 20:38:49.547931   22579 main.go:141] libmachine: (ha-908878) Calling .GetIP
	I0528 20:38:49.551167   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:49.551493   22579 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:38:49.551517   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:49.551723   22579 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/config.json ...
	I0528 20:38:49.551870   22579 start.go:128] duration metric: took 20.973203625s to createHost
	I0528 20:38:49.551889   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:38:49.553803   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:49.554072   22579 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:38:49.554099   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:49.554191   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:38:49.554357   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:38:49.554512   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:38:49.554648   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:38:49.554804   22579 main.go:141] libmachine: Using SSH client type: native
	I0528 20:38:49.554956   22579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0528 20:38:49.554966   22579 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0528 20:38:49.662123   22579 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716928729.634172717
	
	I0528 20:38:49.662140   22579 fix.go:216] guest clock: 1716928729.634172717
	I0528 20:38:49.662147   22579 fix.go:229] Guest: 2024-05-28 20:38:49.634172717 +0000 UTC Remote: 2024-05-28 20:38:49.551880955 +0000 UTC m=+21.076168656 (delta=82.291762ms)
	I0528 20:38:49.662164   22579 fix.go:200] guest clock delta is within tolerance: 82.291762ms
	I0528 20:38:49.662169   22579 start.go:83] releasing machines lock for "ha-908878", held for 21.083572545s
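
The clock check above runs `date +%s.%N` on the guest (the `%!s(MISSING).%!N(MISSING)` rendering is just the logger mangling the literal format string) and compares it with the host clock; here the delta is roughly 82 ms and is accepted. An illustrative way to reproduce the comparison by hand, using the key path and IP from this run; the acceptable tolerance is not asserted here:

    # Sample guest and host clocks and print the difference in seconds.
    KEY=/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/id_rsa
    GUEST=$(ssh -i "$KEY" docker@192.168.39.100 'date +%s.%N')
    HOST=$(date +%s.%N)
    awk "BEGIN{printf \"clock delta: %.3f s\\n\", ${HOST}-${GUEST}}"
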
	I0528 20:38:49.662183   22579 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:38:49.662408   22579 main.go:141] libmachine: (ha-908878) Calling .GetIP
	I0528 20:38:49.664697   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:49.665028   22579 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:38:49.665052   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:49.665198   22579 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:38:49.665658   22579 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:38:49.665868   22579 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:38:49.665963   22579 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0528 20:38:49.666008   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:38:49.666112   22579 ssh_runner.go:195] Run: cat /version.json
	I0528 20:38:49.666135   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:38:49.668578   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:49.668711   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:49.668899   22579 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:38:49.668918   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:49.669027   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:38:49.669166   22579 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:38:49.669173   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:38:49.669192   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:49.669306   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:38:49.669371   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:38:49.669454   22579 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/id_rsa Username:docker}
	I0528 20:38:49.669528   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:38:49.669654   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:38:49.669823   22579 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/id_rsa Username:docker}
	I0528 20:38:49.785714   22579 ssh_runner.go:195] Run: systemctl --version
	I0528 20:38:49.791431   22579 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0528 20:38:49.946535   22579 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0528 20:38:49.952778   22579 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0528 20:38:49.952841   22579 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 20:38:49.967958   22579 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0528 20:38:49.967974   22579 start.go:494] detecting cgroup driver to use...
	I0528 20:38:49.968032   22579 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0528 20:38:49.983154   22579 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 20:38:49.996248   22579 docker.go:217] disabling cri-docker service (if available) ...
	I0528 20:38:49.996292   22579 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0528 20:38:50.009245   22579 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0528 20:38:50.021833   22579 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0528 20:38:50.132329   22579 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0528 20:38:50.281366   22579 docker.go:233] disabling docker service ...
	I0528 20:38:50.281445   22579 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0528 20:38:50.295507   22579 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0528 20:38:50.308570   22579 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0528 20:38:50.425719   22579 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0528 20:38:50.542751   22579 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0528 20:38:50.556721   22579 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 20:38:50.574447   22579 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0528 20:38:50.574511   22579 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:38:50.584319   22579 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0528 20:38:50.584363   22579 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:38:50.594409   22579 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:38:50.604233   22579 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:38:50.614035   22579 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0528 20:38:50.624113   22579 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:38:50.633783   22579 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:38:50.650029   22579 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:38:50.659849   22579 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0528 20:38:50.668562   22579 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0528 20:38:50.668594   22579 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0528 20:38:50.680820   22579 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0528 20:38:50.690010   22579 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 20:38:50.803010   22579 ssh_runner.go:195] Run: sudo systemctl restart crio
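
The sequence above rewrites the CRI-O drop-in (pause image, cgroupfs cgroup manager, conmon_cgroup, unprivileged-port sysctl), loads br_netfilter, enables IPv4 forwarding, and restarts CRI-O. A consolidated sketch of the core edits, using the same file and values as in the log (the one-shot form is illustrative):

    # Edit /etc/crio/crio.conf.d/02-crio.conf in one pass.
    sudo sed -i \
      -e 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' \
      -e 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' \
      /etc/crio/crio.conf.d/02-crio.conf
    sudo modprobe br_netfilter                        # provides /proc/sys/net/bridge/bridge-nf-call-iptables
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward   # kube-proxy and the CNI need IPv4 forwarding
    sudo systemctl daemon-reload && sudo systemctl restart crio
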
	I0528 20:38:50.931454   22579 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0528 20:38:50.931531   22579 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0528 20:38:50.936715   22579 start.go:562] Will wait 60s for crictl version
	I0528 20:38:50.936767   22579 ssh_runner.go:195] Run: which crictl
	I0528 20:38:50.940639   22579 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0528 20:38:50.978739   22579 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0528 20:38:50.978812   22579 ssh_runner.go:195] Run: crio --version
	I0528 20:38:51.005021   22579 ssh_runner.go:195] Run: crio --version
	I0528 20:38:51.035112   22579 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
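
The runtime version probes above can be reproduced directly on the guest (illustrative):

    sudo crictl version   # expects RuntimeName: cri-o, RuntimeVersion: 1.29.1
    crio --version
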
	I0528 20:38:51.036486   22579 main.go:141] libmachine: (ha-908878) Calling .GetIP
	I0528 20:38:51.038790   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:51.039119   22579 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:38:51.039140   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:51.039303   22579 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0528 20:38:51.043414   22579 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 20:38:51.056018   22579 kubeadm.go:877] updating cluster {Name:ha-908878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Cl
usterName:ha-908878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0528 20:38:51.056109   22579 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 20:38:51.056147   22579 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 20:38:51.087184   22579 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0528 20:38:51.087233   22579 ssh_runner.go:195] Run: which lz4
	I0528 20:38:51.091162   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0528 20:38:51.091273   22579 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0528 20:38:51.095372   22579 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0528 20:38:51.095400   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0528 20:38:52.434056   22579 crio.go:462] duration metric: took 1.342826793s to copy over tarball
	I0528 20:38:52.434148   22579 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0528 20:38:54.508765   22579 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.074580937s)
	I0528 20:38:54.508794   22579 crio.go:469] duration metric: took 2.074713225s to extract the tarball
	I0528 20:38:54.508800   22579 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0528 20:38:54.545376   22579 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 20:38:54.588637   22579 crio.go:514] all images are preloaded for cri-o runtime.
	I0528 20:38:54.588657   22579 cache_images.go:84] Images are preloaded, skipping loading
	I0528 20:38:54.588664   22579 kubeadm.go:928] updating node { 192.168.39.100 8443 v1.30.1 crio true true} ...
	I0528 20:38:54.588754   22579 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-908878 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-908878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0528 20:38:54.588815   22579 ssh_runner.go:195] Run: crio config
	I0528 20:38:54.642509   22579 cni.go:84] Creating CNI manager for ""
	I0528 20:38:54.642526   22579 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0528 20:38:54.642535   22579 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0528 20:38:54.642553   22579 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.100 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-908878 NodeName:ha-908878 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0528 20:38:54.642666   22579 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.100
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-908878"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.100
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.100"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
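
The kubeadm config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is what later lands on the guest as /var/tmp/minikube/kubeadm.yaml.new (see the scp below). If the kubeadm build supports `kubeadm config validate`, the copied file can be sanity-checked on the guest before init (illustrative):

    sudo /var/lib/minikube/binaries/v1.30.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new
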
	
	I0528 20:38:54.642687   22579 kube-vip.go:115] generating kube-vip config ...
	I0528 20:38:54.642725   22579 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0528 20:38:54.660351   22579 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0528 20:38:54.660473   22579 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
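
kube-vip runs as a static pod on each control-plane node and announces the HA virtual IP 192.168.39.254 (APIServerHAVIP) on eth0, with control-plane load-balancing on port 8443. Once kubelet is up, two illustrative checks on the guest, using the address and interface from the manifest above:

    sudo crictl ps --name kube-vip            # the static pod's container should be running
    ip addr show eth0 | grep 192.168.39.254   # the VIP should be bound on the current leader
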
	I0528 20:38:54.660537   22579 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0528 20:38:54.670336   22579 binaries.go:44] Found k8s binaries, skipping transfer
	I0528 20:38:54.670394   22579 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0528 20:38:54.679560   22579 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0528 20:38:54.695475   22579 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0528 20:38:54.710820   22579 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0528 20:38:54.726283   22579 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0528 20:38:54.742192   22579 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0528 20:38:54.745729   22579 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 20:38:54.757819   22579 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 20:38:54.876320   22579 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 20:38:54.892785   22579 certs.go:68] Setting up /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878 for IP: 192.168.39.100
	I0528 20:38:54.892803   22579 certs.go:194] generating shared ca certs ...
	I0528 20:38:54.892817   22579 certs.go:226] acquiring lock for ca certs: {Name:mkf081a2e878fd451cfbdf01dc28a5f7a6a3abc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:38:54.892971   22579 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key
	I0528 20:38:54.893009   22579 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key
	I0528 20:38:54.893019   22579 certs.go:256] generating profile certs ...
	I0528 20:38:54.893061   22579 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/client.key
	I0528 20:38:54.893074   22579 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/client.crt with IP's: []
	I0528 20:38:54.965324   22579 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/client.crt ...
	I0528 20:38:54.965348   22579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/client.crt: {Name:mk04662cee3162313797f69f105fd22fa987f6b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:38:54.965538   22579 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/client.key ...
	I0528 20:38:54.965553   22579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/client.key: {Name:mk1af1e1f86c54769b7fe70d345e0cd7ccf018c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:38:54.965633   22579 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key.c4f31d45
	I0528 20:38:54.965648   22579 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt.c4f31d45 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.100 192.168.39.254]
	I0528 20:38:55.548317   22579 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt.c4f31d45 ...
	I0528 20:38:55.548343   22579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt.c4f31d45: {Name:mkd40d2038fb3fdfc8b37af76ff3afaefb2368e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:38:55.548513   22579 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key.c4f31d45 ...
	I0528 20:38:55.548530   22579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key.c4f31d45: {Name:mk8b133081a94b50973c4cf69bd7e8393e52a09c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:38:55.548630   22579 certs.go:381] copying /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt.c4f31d45 -> /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt
	I0528 20:38:55.548718   22579 certs.go:385] copying /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key.c4f31d45 -> /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key
	I0528 20:38:55.548778   22579 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/proxy-client.key
	I0528 20:38:55.548794   22579 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/proxy-client.crt with IP's: []
	I0528 20:38:55.595371   22579 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/proxy-client.crt ...
	I0528 20:38:55.595395   22579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/proxy-client.crt: {Name:mk74e6fe33213c1f2ad92f1d4eda4579c8e53eec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:38:55.595538   22579 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/proxy-client.key ...
	I0528 20:38:55.595551   22579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/proxy-client.key: {Name:mk5dd9209bc6457e3b260fb1bf0944035f78220d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:38:55.595638   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0528 20:38:55.595656   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0528 20:38:55.595668   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0528 20:38:55.595680   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0528 20:38:55.595690   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0528 20:38:55.595702   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0528 20:38:55.595711   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0528 20:38:55.595723   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0528 20:38:55.595804   22579 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem (1338 bytes)
	W0528 20:38:55.595841   22579 certs.go:480] ignoring /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760_empty.pem, impossibly tiny 0 bytes
	I0528 20:38:55.595851   22579 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem (1675 bytes)
	I0528 20:38:55.595870   22579 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem (1078 bytes)
	I0528 20:38:55.595895   22579 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem (1123 bytes)
	I0528 20:38:55.595915   22579 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem (1675 bytes)
	I0528 20:38:55.595958   22579 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem (1708 bytes)
	I0528 20:38:55.595983   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0528 20:38:55.595996   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem -> /usr/share/ca-certificates/11760.pem
	I0528 20:38:55.596005   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem -> /usr/share/ca-certificates/117602.pem
	I0528 20:38:55.596498   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0528 20:38:55.622611   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0528 20:38:55.646231   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0528 20:38:55.674555   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0528 20:38:55.703161   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0528 20:38:55.725075   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0528 20:38:55.748465   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0528 20:38:55.771076   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0528 20:38:55.793745   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0528 20:38:55.816868   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem --> /usr/share/ca-certificates/11760.pem (1338 bytes)
	I0528 20:38:55.839445   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem --> /usr/share/ca-certificates/117602.pem (1708 bytes)
	I0528 20:38:55.867401   22579 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
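
The apiserver certificate copied above was generated with IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.100 and 192.168.39.254, i.e. including the node IP and the HA VIP. An illustrative way to confirm them on the guest:

    sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
      | grep -A1 'Subject Alternative Name'
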
	I0528 20:38:55.886462   22579 ssh_runner.go:195] Run: openssl version
	I0528 20:38:55.892252   22579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0528 20:38:55.904312   22579 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0528 20:38:55.908752   22579 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 28 20:22 /usr/share/ca-certificates/minikubeCA.pem
	I0528 20:38:55.908798   22579 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0528 20:38:55.914480   22579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0528 20:38:55.925428   22579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11760.pem && ln -fs /usr/share/ca-certificates/11760.pem /etc/ssl/certs/11760.pem"
	I0528 20:38:55.935860   22579 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11760.pem
	I0528 20:38:55.940145   22579 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 28 20:34 /usr/share/ca-certificates/11760.pem
	I0528 20:38:55.940189   22579 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11760.pem
	I0528 20:38:55.945611   22579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11760.pem /etc/ssl/certs/51391683.0"
	I0528 20:38:55.955927   22579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/117602.pem && ln -fs /usr/share/ca-certificates/117602.pem /etc/ssl/certs/117602.pem"
	I0528 20:38:55.966605   22579 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117602.pem
	I0528 20:38:55.971093   22579 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 28 20:34 /usr/share/ca-certificates/117602.pem
	I0528 20:38:55.971135   22579 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117602.pem
	I0528 20:38:55.976609   22579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/117602.pem /etc/ssl/certs/3ec20f2e.0"
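
The openssl/ln pairs above build the hashed symlinks (b5213941.0, 51391683.0, 3ec20f2e.0) that OpenSSL's CA lookup expects in /etc/ssl/certs. The pattern, written out as a sketch for one certificate:

    # Derive the subject-hash link name for a CA and (re)create the symlink.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # b5213941.0 in this run
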
	I0528 20:38:55.987317   22579 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0528 20:38:55.991405   22579 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0528 20:38:55.991462   22579 kubeadm.go:391] StartCluster: {Name:ha-908878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clust
erName:ha-908878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 20:38:55.991550   22579 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0528 20:38:55.991591   22579 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0528 20:38:56.031545   22579 cri.go:89] found id: ""
	I0528 20:38:56.031606   22579 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0528 20:38:56.041726   22579 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0528 20:38:56.051499   22579 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0528 20:38:56.060959   22579 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0528 20:38:56.060979   22579 kubeadm.go:156] found existing configuration files:
	
	I0528 20:38:56.061011   22579 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0528 20:38:56.069989   22579 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0528 20:38:56.070041   22579 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0528 20:38:56.079293   22579 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0528 20:38:56.088136   22579 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0528 20:38:56.088181   22579 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0528 20:38:56.097481   22579 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0528 20:38:56.106289   22579 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0528 20:38:56.106338   22579 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0528 20:38:56.115374   22579 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0528 20:38:56.123980   22579 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0528 20:38:56.124028   22579 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0528 20:38:56.133084   22579 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
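
kubeadm init is now started with the generated config and a broad set of ignored preflight errors. While it runs, progress can be followed on the guest, and once it finishes the control plane can be queried; kubectl is assumed here to be among the cached binaries in /var/lib/minikube/binaries/v1.30.1 (illustrative):

    sudo journalctl -u kubelet -f                       # follow kubelet while the control plane comes up
    sudo KUBECONFIG=/etc/kubernetes/admin.conf \
      /var/lib/minikube/binaries/v1.30.1/kubectl get nodes -o wide
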
	I0528 20:38:56.366487   22579 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0528 20:39:07.836695   22579 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0528 20:39:07.836768   22579 kubeadm.go:309] [preflight] Running pre-flight checks
	I0528 20:39:07.836865   22579 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0528 20:39:07.836983   22579 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0528 20:39:07.837059   22579 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0528 20:39:07.837113   22579 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0528 20:39:07.838580   22579 out.go:204]   - Generating certificates and keys ...
	I0528 20:39:07.838648   22579 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0528 20:39:07.838697   22579 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0528 20:39:07.838755   22579 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0528 20:39:07.838808   22579 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0528 20:39:07.838882   22579 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0528 20:39:07.838932   22579 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0528 20:39:07.838985   22579 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0528 20:39:07.839092   22579 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-908878 localhost] and IPs [192.168.39.100 127.0.0.1 ::1]
	I0528 20:39:07.839149   22579 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0528 20:39:07.839246   22579 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-908878 localhost] and IPs [192.168.39.100 127.0.0.1 ::1]
	I0528 20:39:07.839334   22579 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0528 20:39:07.839398   22579 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0528 20:39:07.839441   22579 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0528 20:39:07.839488   22579 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0528 20:39:07.839532   22579 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0528 20:39:07.839579   22579 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0528 20:39:07.839633   22579 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0528 20:39:07.839683   22579 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0528 20:39:07.839730   22579 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0528 20:39:07.839799   22579 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0528 20:39:07.839878   22579 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0528 20:39:07.841193   22579 out.go:204]   - Booting up control plane ...
	I0528 20:39:07.841281   22579 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0528 20:39:07.841367   22579 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0528 20:39:07.841447   22579 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0528 20:39:07.841549   22579 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0528 20:39:07.841628   22579 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0528 20:39:07.841662   22579 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0528 20:39:07.841787   22579 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0528 20:39:07.841875   22579 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0528 20:39:07.841934   22579 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.265431ms
	I0528 20:39:07.842012   22579 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0528 20:39:07.842071   22579 kubeadm.go:309] [api-check] The API server is healthy after 6.025489101s
	I0528 20:39:07.842204   22579 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0528 20:39:07.842390   22579 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0528 20:39:07.842474   22579 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0528 20:39:07.842705   22579 kubeadm.go:309] [mark-control-plane] Marking the node ha-908878 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0528 20:39:07.842792   22579 kubeadm.go:309] [bootstrap-token] Using token: yh74jr.5twmrsgoggpczbdk
	I0528 20:39:07.843965   22579 out.go:204]   - Configuring RBAC rules ...
	I0528 20:39:07.844050   22579 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0528 20:39:07.844154   22579 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0528 20:39:07.844309   22579 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0528 20:39:07.844453   22579 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0528 20:39:07.844570   22579 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0528 20:39:07.844675   22579 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0528 20:39:07.844830   22579 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0528 20:39:07.844891   22579 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0528 20:39:07.844951   22579 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0528 20:39:07.844965   22579 kubeadm.go:309] 
	I0528 20:39:07.845031   22579 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0528 20:39:07.845040   22579 kubeadm.go:309] 
	I0528 20:39:07.845125   22579 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0528 20:39:07.845131   22579 kubeadm.go:309] 
	I0528 20:39:07.845180   22579 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0528 20:39:07.845271   22579 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0528 20:39:07.845353   22579 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0528 20:39:07.845362   22579 kubeadm.go:309] 
	I0528 20:39:07.845423   22579 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0528 20:39:07.845429   22579 kubeadm.go:309] 
	I0528 20:39:07.845467   22579 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0528 20:39:07.845476   22579 kubeadm.go:309] 
	I0528 20:39:07.845522   22579 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0528 20:39:07.845584   22579 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0528 20:39:07.845648   22579 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0528 20:39:07.845656   22579 kubeadm.go:309] 
	I0528 20:39:07.845728   22579 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0528 20:39:07.845814   22579 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0528 20:39:07.845821   22579 kubeadm.go:309] 
	I0528 20:39:07.845896   22579 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token yh74jr.5twmrsgoggpczbdk \
	I0528 20:39:07.846025   22579 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1de7c92f7c6fda1d3adee02fe3e58e442e8ad50d9d224864ca8764aa1a3ae8bb \
	I0528 20:39:07.846048   22579 kubeadm.go:309] 	--control-plane 
	I0528 20:39:07.846065   22579 kubeadm.go:309] 
	I0528 20:39:07.846141   22579 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0528 20:39:07.846151   22579 kubeadm.go:309] 
	I0528 20:39:07.846223   22579 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token yh74jr.5twmrsgoggpczbdk \
	I0528 20:39:07.846331   22579 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1de7c92f7c6fda1d3adee02fe3e58e442e8ad50d9d224864ca8764aa1a3ae8bb 
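
The join commands above carry a --discovery-token-ca-cert-hash, which kubeadm computes as the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. A short Go sketch that recomputes such a hash from a CA certificate; /etc/kubernetes/pki/ca.crt is kubeadm's conventional location and is assumed here:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// kubeadm's conventional CA certificate location on a control-plane node.
	data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// The discovery hash is SHA-256 over the DER-encoded Subject Public Key Info.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
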
	I0528 20:39:07.846341   22579 cni.go:84] Creating CNI manager for ""
	I0528 20:39:07.846345   22579 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0528 20:39:07.847684   22579 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0528 20:39:07.848691   22579 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0528 20:39:07.854109   22579 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0528 20:39:07.854122   22579 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0528 20:39:07.872861   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
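
With the API server healthy, minikube copies its kindnet manifest to /var/tmp/minikube/cni.yaml and applies it with the bundled kubectl, as the last Run line shows. A hedged sketch of that single apply step, shelling out the same way (binary, kubeconfig and manifest paths are taken from the log; error handling is simplified):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Apply the CNI manifest against the node-local kubeconfig, as in the log.
	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.30.1/kubectl",
		"apply", "--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("apply failed: %v\n%s", err, out)
	}
	log.Printf("%s", out)
}
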
	I0528 20:39:08.334574   22579 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0528 20:39:08.334700   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:08.334736   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-908878 minikube.k8s.io/updated_at=2024_05_28T20_39_08_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82 minikube.k8s.io/name=ha-908878 minikube.k8s.io/primary=true
	I0528 20:39:08.367772   22579 ops.go:34] apiserver oom_adj: -16
	I0528 20:39:08.507693   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:09.008762   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:09.507970   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:10.008494   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:10.508329   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:11.008607   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:11.507714   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:12.008450   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:12.508428   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:13.008496   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:13.508160   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:14.007992   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:14.508296   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:15.007817   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:15.508668   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:16.008011   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:16.508287   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:17.007921   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:17.508586   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:18.008150   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:18.507863   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:19.007850   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:19.507792   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:20.008765   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:20.508271   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:20.597881   22579 kubeadm.go:1107] duration metric: took 12.26324806s to wait for elevateKubeSystemPrivileges
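
The run of "get sa default" calls above is minikube polling roughly every 500ms until the default service account exists in the new cluster, which is what the 12.26s elevateKubeSystemPrivileges metric measures. A minimal polling sketch under the same assumptions (kubectl binary and kubeconfig paths come from the log; the two-minute cap is an arbitrary choice for this sketch):

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // arbitrary cap for the sketch
	for time.Now().Before(deadline) {
		err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.30.1/kubectl",
			"get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			log.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	log.Fatal("timed out waiting for the default service account")
}
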
	W0528 20:39:20.597925   22579 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0528 20:39:20.597934   22579 kubeadm.go:393] duration metric: took 24.606476573s to StartCluster
	I0528 20:39:20.597951   22579 settings.go:142] acquiring lock: {Name:mkc1182212d780ab38539839372c25eb989b2e70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:39:20.598029   22579 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 20:39:20.598869   22579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/kubeconfig: {Name:mkb9ab62df00efc21274382bbe7156f03baabac9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:39:20.599107   22579 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0528 20:39:20.599112   22579 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0528 20:39:20.599137   22579 start.go:240] waiting for startup goroutines ...
	I0528 20:39:20.599144   22579 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0528 20:39:20.599219   22579 addons.go:69] Setting storage-provisioner=true in profile "ha-908878"
	I0528 20:39:20.599239   22579 addons.go:69] Setting default-storageclass=true in profile "ha-908878"
	I0528 20:39:20.599253   22579 addons.go:234] Setting addon storage-provisioner=true in "ha-908878"
	I0528 20:39:20.599263   22579 config.go:182] Loaded profile config "ha-908878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 20:39:20.599279   22579 host.go:66] Checking if "ha-908878" exists ...
	I0528 20:39:20.599269   22579 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-908878"
	I0528 20:39:20.599630   22579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:39:20.599660   22579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:39:20.599662   22579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:39:20.599685   22579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:39:20.614397   22579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43433
	I0528 20:39:20.614413   22579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40303
	I0528 20:39:20.614823   22579 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:39:20.614877   22579 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:39:20.615282   22579 main.go:141] libmachine: Using API Version  1
	I0528 20:39:20.615301   22579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:39:20.615408   22579 main.go:141] libmachine: Using API Version  1
	I0528 20:39:20.615433   22579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:39:20.615641   22579 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:39:20.615774   22579 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:39:20.615946   22579 main.go:141] libmachine: (ha-908878) Calling .GetState
	I0528 20:39:20.616182   22579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:39:20.616214   22579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:39:20.618109   22579 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 20:39:20.618459   22579 kapi.go:59] client config for ha-908878: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/client.crt", KeyFile:"/home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/client.key", CAFile:"/home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf8220), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0528 20:39:20.618973   22579 cert_rotation.go:137] Starting client certificate rotation controller
	I0528 20:39:20.619180   22579 addons.go:234] Setting addon default-storageclass=true in "ha-908878"
	I0528 20:39:20.619228   22579 host.go:66] Checking if "ha-908878" exists ...
	I0528 20:39:20.619583   22579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:39:20.619614   22579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:39:20.630882   22579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34179
	I0528 20:39:20.631271   22579 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:39:20.631716   22579 main.go:141] libmachine: Using API Version  1
	I0528 20:39:20.631732   22579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:39:20.632083   22579 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:39:20.632316   22579 main.go:141] libmachine: (ha-908878) Calling .GetState
	I0528 20:39:20.633946   22579 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:39:20.633974   22579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39783
	I0528 20:39:20.636312   22579 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0528 20:39:20.634361   22579 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:39:20.637705   22579 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0528 20:39:20.637724   22579 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0528 20:39:20.637742   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:39:20.638109   22579 main.go:141] libmachine: Using API Version  1
	I0528 20:39:20.638135   22579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:39:20.638472   22579 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:39:20.639030   22579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:39:20.639073   22579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:39:20.640850   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:39:20.641225   22579 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:39:20.641262   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:39:20.641389   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:39:20.641581   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:39:20.641716   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:39:20.641865   22579 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/id_rsa Username:docker}
	I0528 20:39:20.654398   22579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36505
	I0528 20:39:20.654781   22579 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:39:20.655238   22579 main.go:141] libmachine: Using API Version  1
	I0528 20:39:20.655262   22579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:39:20.655554   22579 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:39:20.655753   22579 main.go:141] libmachine: (ha-908878) Calling .GetState
	I0528 20:39:20.657284   22579 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:39:20.657482   22579 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0528 20:39:20.657496   22579 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0528 20:39:20.657509   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:39:20.660368   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:39:20.660834   22579 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:39:20.660861   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:39:20.660947   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:39:20.661127   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:39:20.661288   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:39:20.661464   22579 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/id_rsa Username:docker}
	I0528 20:39:20.691834   22579 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0528 20:39:20.777748   22579 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0528 20:39:20.813678   22579 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0528 20:39:20.977077   22579 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
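
The sed pipeline above edits the CoreDNS ConfigMap in place so that host.minikube.internal resolves to the host gateway 192.168.39.1 and a log directive is enabled. The resulting Corefile fragment looks roughly like the constant below (kept in Go only so all examples stay in one language; the plugins elided with a comment are cluster-specific):

package main

import "fmt"

// corednsHostsBlock approximates what the sed pipeline injects: a log
// directive before errors, and a hosts{} stanza before the forward plugin.
const corednsHostsBlock = `
        log
        errors
        # ... other plugins unchanged ...
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
`

func main() { fmt.Print(corednsHostsBlock) }
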
	I0528 20:39:21.384440   22579 main.go:141] libmachine: Making call to close driver server
	I0528 20:39:21.384468   22579 main.go:141] libmachine: (ha-908878) Calling .Close
	I0528 20:39:21.384470   22579 main.go:141] libmachine: Making call to close driver server
	I0528 20:39:21.384481   22579 main.go:141] libmachine: (ha-908878) Calling .Close
	I0528 20:39:21.384758   22579 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:39:21.384776   22579 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:39:21.384785   22579 main.go:141] libmachine: Making call to close driver server
	I0528 20:39:21.384793   22579 main.go:141] libmachine: (ha-908878) Calling .Close
	I0528 20:39:21.384800   22579 main.go:141] libmachine: (ha-908878) DBG | Closing plugin on server side
	I0528 20:39:21.384758   22579 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:39:21.384831   22579 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:39:21.384840   22579 main.go:141] libmachine: Making call to close driver server
	I0528 20:39:21.384848   22579 main.go:141] libmachine: (ha-908878) Calling .Close
	I0528 20:39:21.385038   22579 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:39:21.385052   22579 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:39:21.385064   22579 main.go:141] libmachine: (ha-908878) DBG | Closing plugin on server side
	I0528 20:39:21.385212   22579 main.go:141] libmachine: (ha-908878) DBG | Closing plugin on server side
	I0528 20:39:21.385234   22579 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:39:21.385247   22579 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:39:21.385365   22579 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0528 20:39:21.385412   22579 round_trippers.go:469] Request Headers:
	I0528 20:39:21.385426   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:39:21.385432   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:39:21.398255   22579 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0528 20:39:21.398756   22579 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0528 20:39:21.398769   22579 round_trippers.go:469] Request Headers:
	I0528 20:39:21.398776   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:39:21.398779   22579 round_trippers.go:473]     Content-Type: application/json
	I0528 20:39:21.398782   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:39:21.403303   22579 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 20:39:21.403439   22579 main.go:141] libmachine: Making call to close driver server
	I0528 20:39:21.403453   22579 main.go:141] libmachine: (ha-908878) Calling .Close
	I0528 20:39:21.403725   22579 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:39:21.403738   22579 main.go:141] libmachine: (ha-908878) DBG | Closing plugin on server side
	I0528 20:39:21.403744   22579 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:39:21.406222   22579 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0528 20:39:21.407409   22579 addons.go:510] duration metric: took 808.260358ms for enable addons: enabled=[storage-provisioner default-storageclass]
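
The GET followed by a PUT on /apis/storage.k8s.io/v1/storageclasses/standard is the default-storageclass addon marking minikube's "standard" StorageClass as the cluster default. A client-go sketch of an equivalent update; the kubeconfig path is the one from the log, and the real addon goes through minikube's own helpers rather than this direct call:

package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18966-3963/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), "standard", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	// Well-known annotation that marks a StorageClass as the cluster default.
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	if _, err := cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("standard StorageClass marked as default")
}
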
	I0528 20:39:21.407451   22579 start.go:245] waiting for cluster config update ...
	I0528 20:39:21.407469   22579 start.go:254] writing updated cluster config ...
	I0528 20:39:21.409022   22579 out.go:177] 
	I0528 20:39:21.410317   22579 config.go:182] Loaded profile config "ha-908878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 20:39:21.410381   22579 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/config.json ...
	I0528 20:39:21.412048   22579 out.go:177] * Starting "ha-908878-m02" control-plane node in "ha-908878" cluster
	I0528 20:39:21.413146   22579 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 20:39:21.413164   22579 cache.go:56] Caching tarball of preloaded images
	I0528 20:39:21.413243   22579 preload.go:173] Found /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0528 20:39:21.413255   22579 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0528 20:39:21.413312   22579 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/config.json ...
	I0528 20:39:21.413455   22579 start.go:360] acquireMachinesLock for ha-908878-m02: {Name:mk6fb3103b370c7f3d9923fd9de7afe2c77239d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0528 20:39:21.413499   22579 start.go:364] duration metric: took 26.01µs to acquireMachinesLock for "ha-908878-m02"
	I0528 20:39:21.413522   22579 start.go:93] Provisioning new machine with config: &{Name:ha-908878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-908878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0528 20:39:21.413616   22579 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0528 20:39:21.415020   22579 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0528 20:39:21.415087   22579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:39:21.415108   22579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:39:21.429900   22579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37587
	I0528 20:39:21.430248   22579 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:39:21.430757   22579 main.go:141] libmachine: Using API Version  1
	I0528 20:39:21.430776   22579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:39:21.431054   22579 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:39:21.431263   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetMachineName
	I0528 20:39:21.431412   22579 main.go:141] libmachine: (ha-908878-m02) Calling .DriverName
	I0528 20:39:21.431549   22579 start.go:159] libmachine.API.Create for "ha-908878" (driver="kvm2")
	I0528 20:39:21.431575   22579 client.go:168] LocalClient.Create starting
	I0528 20:39:21.431606   22579 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem
	I0528 20:39:21.431640   22579 main.go:141] libmachine: Decoding PEM data...
	I0528 20:39:21.431654   22579 main.go:141] libmachine: Parsing certificate...
	I0528 20:39:21.431700   22579 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem
	I0528 20:39:21.431717   22579 main.go:141] libmachine: Decoding PEM data...
	I0528 20:39:21.431727   22579 main.go:141] libmachine: Parsing certificate...
	I0528 20:39:21.431747   22579 main.go:141] libmachine: Running pre-create checks...
	I0528 20:39:21.431754   22579 main.go:141] libmachine: (ha-908878-m02) Calling .PreCreateCheck
	I0528 20:39:21.431906   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetConfigRaw
	I0528 20:39:21.432269   22579 main.go:141] libmachine: Creating machine...
	I0528 20:39:21.432284   22579 main.go:141] libmachine: (ha-908878-m02) Calling .Create
	I0528 20:39:21.432407   22579 main.go:141] libmachine: (ha-908878-m02) Creating KVM machine...
	I0528 20:39:21.433443   22579 main.go:141] libmachine: (ha-908878-m02) DBG | found existing default KVM network
	I0528 20:39:21.433607   22579 main.go:141] libmachine: (ha-908878-m02) DBG | found existing private KVM network mk-ha-908878
	I0528 20:39:21.433790   22579 main.go:141] libmachine: (ha-908878-m02) Setting up store path in /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m02 ...
	I0528 20:39:21.433816   22579 main.go:141] libmachine: (ha-908878-m02) Building disk image from file:///home/jenkins/minikube-integration/18966-3963/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso
	I0528 20:39:21.433833   22579 main.go:141] libmachine: (ha-908878-m02) DBG | I0528 20:39:21.433728   22978 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 20:39:21.433933   22579 main.go:141] libmachine: (ha-908878-m02) Downloading /home/jenkins/minikube-integration/18966-3963/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18966-3963/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0528 20:39:21.651560   22579 main.go:141] libmachine: (ha-908878-m02) DBG | I0528 20:39:21.651450   22978 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m02/id_rsa...
	I0528 20:39:21.796305   22579 main.go:141] libmachine: (ha-908878-m02) DBG | I0528 20:39:21.796147   22978 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m02/ha-908878-m02.rawdisk...
	I0528 20:39:21.796343   22579 main.go:141] libmachine: (ha-908878-m02) DBG | Writing magic tar header
	I0528 20:39:21.796358   22579 main.go:141] libmachine: (ha-908878-m02) DBG | Writing SSH key tar header
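
Setting up the machine directory involves generating an SSH keypair (the id_rsa created above) and a raw disk image before the domain is defined. A sketch of the key-generation half using the standard library plus golang.org/x/crypto/ssh; the output file names are illustrative:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	// Private key in PEM, mode 0600 - the log later shows id_rsa as -rw-------.
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
		log.Fatal(err)
	}
	// Public half in authorized_keys format, for injection into the guest.
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
		log.Fatal(err)
	}
	log.Println("generated id_rsa / id_rsa.pub")
}
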
	I0528 20:39:21.796479   22579 main.go:141] libmachine: (ha-908878-m02) DBG | I0528 20:39:21.796391   22978 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m02 ...
	I0528 20:39:21.796538   22579 main.go:141] libmachine: (ha-908878-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m02
	I0528 20:39:21.796560   22579 main.go:141] libmachine: (ha-908878-m02) Setting executable bit set on /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m02 (perms=drwx------)
	I0528 20:39:21.796577   22579 main.go:141] libmachine: (ha-908878-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18966-3963/.minikube/machines
	I0528 20:39:21.796612   22579 main.go:141] libmachine: (ha-908878-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 20:39:21.796626   22579 main.go:141] libmachine: (ha-908878-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18966-3963
	I0528 20:39:21.796638   22579 main.go:141] libmachine: (ha-908878-m02) Setting executable bit set on /home/jenkins/minikube-integration/18966-3963/.minikube/machines (perms=drwxr-xr-x)
	I0528 20:39:21.796649   22579 main.go:141] libmachine: (ha-908878-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0528 20:39:21.796660   22579 main.go:141] libmachine: (ha-908878-m02) Setting executable bit set on /home/jenkins/minikube-integration/18966-3963/.minikube (perms=drwxr-xr-x)
	I0528 20:39:21.796668   22579 main.go:141] libmachine: (ha-908878-m02) DBG | Checking permissions on dir: /home/jenkins
	I0528 20:39:21.796676   22579 main.go:141] libmachine: (ha-908878-m02) Setting executable bit set on /home/jenkins/minikube-integration/18966-3963 (perms=drwxrwxr-x)
	I0528 20:39:21.796687   22579 main.go:141] libmachine: (ha-908878-m02) DBG | Checking permissions on dir: /home
	I0528 20:39:21.796702   22579 main.go:141] libmachine: (ha-908878-m02) DBG | Skipping /home - not owner
	I0528 20:39:21.796718   22579 main.go:141] libmachine: (ha-908878-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0528 20:39:21.796731   22579 main.go:141] libmachine: (ha-908878-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0528 20:39:21.796743   22579 main.go:141] libmachine: (ha-908878-m02) Creating domain...
	I0528 20:39:21.797858   22579 main.go:141] libmachine: (ha-908878-m02) define libvirt domain using xml: 
	I0528 20:39:21.797872   22579 main.go:141] libmachine: (ha-908878-m02) <domain type='kvm'>
	I0528 20:39:21.797879   22579 main.go:141] libmachine: (ha-908878-m02)   <name>ha-908878-m02</name>
	I0528 20:39:21.797884   22579 main.go:141] libmachine: (ha-908878-m02)   <memory unit='MiB'>2200</memory>
	I0528 20:39:21.797889   22579 main.go:141] libmachine: (ha-908878-m02)   <vcpu>2</vcpu>
	I0528 20:39:21.797894   22579 main.go:141] libmachine: (ha-908878-m02)   <features>
	I0528 20:39:21.797899   22579 main.go:141] libmachine: (ha-908878-m02)     <acpi/>
	I0528 20:39:21.797903   22579 main.go:141] libmachine: (ha-908878-m02)     <apic/>
	I0528 20:39:21.797909   22579 main.go:141] libmachine: (ha-908878-m02)     <pae/>
	I0528 20:39:21.797913   22579 main.go:141] libmachine: (ha-908878-m02)     
	I0528 20:39:21.797919   22579 main.go:141] libmachine: (ha-908878-m02)   </features>
	I0528 20:39:21.797926   22579 main.go:141] libmachine: (ha-908878-m02)   <cpu mode='host-passthrough'>
	I0528 20:39:21.797931   22579 main.go:141] libmachine: (ha-908878-m02)   
	I0528 20:39:21.797937   22579 main.go:141] libmachine: (ha-908878-m02)   </cpu>
	I0528 20:39:21.797962   22579 main.go:141] libmachine: (ha-908878-m02)   <os>
	I0528 20:39:21.797988   22579 main.go:141] libmachine: (ha-908878-m02)     <type>hvm</type>
	I0528 20:39:21.797999   22579 main.go:141] libmachine: (ha-908878-m02)     <boot dev='cdrom'/>
	I0528 20:39:21.798010   22579 main.go:141] libmachine: (ha-908878-m02)     <boot dev='hd'/>
	I0528 20:39:21.798019   22579 main.go:141] libmachine: (ha-908878-m02)     <bootmenu enable='no'/>
	I0528 20:39:21.798030   22579 main.go:141] libmachine: (ha-908878-m02)   </os>
	I0528 20:39:21.798038   22579 main.go:141] libmachine: (ha-908878-m02)   <devices>
	I0528 20:39:21.798050   22579 main.go:141] libmachine: (ha-908878-m02)     <disk type='file' device='cdrom'>
	I0528 20:39:21.798063   22579 main.go:141] libmachine: (ha-908878-m02)       <source file='/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m02/boot2docker.iso'/>
	I0528 20:39:21.798075   22579 main.go:141] libmachine: (ha-908878-m02)       <target dev='hdc' bus='scsi'/>
	I0528 20:39:21.798084   22579 main.go:141] libmachine: (ha-908878-m02)       <readonly/>
	I0528 20:39:21.798100   22579 main.go:141] libmachine: (ha-908878-m02)     </disk>
	I0528 20:39:21.798115   22579 main.go:141] libmachine: (ha-908878-m02)     <disk type='file' device='disk'>
	I0528 20:39:21.798128   22579 main.go:141] libmachine: (ha-908878-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0528 20:39:21.798146   22579 main.go:141] libmachine: (ha-908878-m02)       <source file='/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m02/ha-908878-m02.rawdisk'/>
	I0528 20:39:21.798158   22579 main.go:141] libmachine: (ha-908878-m02)       <target dev='hda' bus='virtio'/>
	I0528 20:39:21.798182   22579 main.go:141] libmachine: (ha-908878-m02)     </disk>
	I0528 20:39:21.798204   22579 main.go:141] libmachine: (ha-908878-m02)     <interface type='network'>
	I0528 20:39:21.798219   22579 main.go:141] libmachine: (ha-908878-m02)       <source network='mk-ha-908878'/>
	I0528 20:39:21.798236   22579 main.go:141] libmachine: (ha-908878-m02)       <model type='virtio'/>
	I0528 20:39:21.798248   22579 main.go:141] libmachine: (ha-908878-m02)     </interface>
	I0528 20:39:21.798259   22579 main.go:141] libmachine: (ha-908878-m02)     <interface type='network'>
	I0528 20:39:21.798267   22579 main.go:141] libmachine: (ha-908878-m02)       <source network='default'/>
	I0528 20:39:21.798275   22579 main.go:141] libmachine: (ha-908878-m02)       <model type='virtio'/>
	I0528 20:39:21.798285   22579 main.go:141] libmachine: (ha-908878-m02)     </interface>
	I0528 20:39:21.798296   22579 main.go:141] libmachine: (ha-908878-m02)     <serial type='pty'>
	I0528 20:39:21.798318   22579 main.go:141] libmachine: (ha-908878-m02)       <target port='0'/>
	I0528 20:39:21.798337   22579 main.go:141] libmachine: (ha-908878-m02)     </serial>
	I0528 20:39:21.798350   22579 main.go:141] libmachine: (ha-908878-m02)     <console type='pty'>
	I0528 20:39:21.798363   22579 main.go:141] libmachine: (ha-908878-m02)       <target type='serial' port='0'/>
	I0528 20:39:21.798375   22579 main.go:141] libmachine: (ha-908878-m02)     </console>
	I0528 20:39:21.798392   22579 main.go:141] libmachine: (ha-908878-m02)     <rng model='virtio'>
	I0528 20:39:21.798407   22579 main.go:141] libmachine: (ha-908878-m02)       <backend model='random'>/dev/random</backend>
	I0528 20:39:21.798417   22579 main.go:141] libmachine: (ha-908878-m02)     </rng>
	I0528 20:39:21.798425   22579 main.go:141] libmachine: (ha-908878-m02)     
	I0528 20:39:21.798441   22579 main.go:141] libmachine: (ha-908878-m02)     
	I0528 20:39:21.798453   22579 main.go:141] libmachine: (ha-908878-m02)   </devices>
	I0528 20:39:21.798467   22579 main.go:141] libmachine: (ha-908878-m02) </domain>
	I0528 20:39:21.798481   22579 main.go:141] libmachine: (ha-908878-m02) 
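
The XML block above is the complete libvirt domain definition minikube generates for ha-908878-m02. The kvm2 driver submits it through the libvirt API; a rough command-line equivalent is to save that XML to a file and define/start the domain with virsh, as sketched below (supplying the XML via an environment variable is a placeholder for this sketch):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// domainXML would be the <domain type='kvm'>...</domain> document printed
	// in the log above, written to a temporary file for virsh to consume.
	domainXML := os.Getenv("DOMAIN_XML") // placeholder: supply the XML yourself
	f, err := os.CreateTemp("", "ha-908878-m02-*.xml")
	if err != nil {
		log.Fatal(err)
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(domainXML); err != nil {
		log.Fatal(err)
	}
	f.Close()

	// Define the persistent domain, then start it.
	for _, args := range [][]string{
		{"define", f.Name()},
		{"start", "ha-908878-m02"},
	} {
		out, err := exec.Command("virsh", args...).CombinedOutput()
		if err != nil {
			log.Fatalf("virsh %v: %v\n%s", args, err, out)
		}
	}
}
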
	I0528 20:39:21.805065   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:0b:f1:4c in network default
	I0528 20:39:21.805662   22579 main.go:141] libmachine: (ha-908878-m02) Ensuring networks are active...
	I0528 20:39:21.805688   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:21.806449   22579 main.go:141] libmachine: (ha-908878-m02) Ensuring network default is active
	I0528 20:39:21.806884   22579 main.go:141] libmachine: (ha-908878-m02) Ensuring network mk-ha-908878 is active
	I0528 20:39:21.807245   22579 main.go:141] libmachine: (ha-908878-m02) Getting domain xml...
	I0528 20:39:21.808093   22579 main.go:141] libmachine: (ha-908878-m02) Creating domain...
	I0528 20:39:22.994138   22579 main.go:141] libmachine: (ha-908878-m02) Waiting to get IP...
	I0528 20:39:22.994884   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:22.995242   22579 main.go:141] libmachine: (ha-908878-m02) DBG | unable to find current IP address of domain ha-908878-m02 in network mk-ha-908878
	I0528 20:39:22.995300   22579 main.go:141] libmachine: (ha-908878-m02) DBG | I0528 20:39:22.995219   22978 retry.go:31] will retry after 236.223184ms: waiting for machine to come up
	I0528 20:39:23.232819   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:23.233218   22579 main.go:141] libmachine: (ha-908878-m02) DBG | unable to find current IP address of domain ha-908878-m02 in network mk-ha-908878
	I0528 20:39:23.233277   22579 main.go:141] libmachine: (ha-908878-m02) DBG | I0528 20:39:23.233197   22978 retry.go:31] will retry after 315.81749ms: waiting for machine to come up
	I0528 20:39:23.550722   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:23.551140   22579 main.go:141] libmachine: (ha-908878-m02) DBG | unable to find current IP address of domain ha-908878-m02 in network mk-ha-908878
	I0528 20:39:23.551166   22579 main.go:141] libmachine: (ha-908878-m02) DBG | I0528 20:39:23.551081   22978 retry.go:31] will retry after 387.67089ms: waiting for machine to come up
	I0528 20:39:23.940625   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:23.941028   22579 main.go:141] libmachine: (ha-908878-m02) DBG | unable to find current IP address of domain ha-908878-m02 in network mk-ha-908878
	I0528 20:39:23.941079   22579 main.go:141] libmachine: (ha-908878-m02) DBG | I0528 20:39:23.941011   22978 retry.go:31] will retry after 586.027605ms: waiting for machine to come up
	I0528 20:39:24.528941   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:24.529437   22579 main.go:141] libmachine: (ha-908878-m02) DBG | unable to find current IP address of domain ha-908878-m02 in network mk-ha-908878
	I0528 20:39:24.529464   22579 main.go:141] libmachine: (ha-908878-m02) DBG | I0528 20:39:24.529398   22978 retry.go:31] will retry after 558.346168ms: waiting for machine to come up
	I0528 20:39:25.088820   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:25.089261   22579 main.go:141] libmachine: (ha-908878-m02) DBG | unable to find current IP address of domain ha-908878-m02 in network mk-ha-908878
	I0528 20:39:25.089288   22579 main.go:141] libmachine: (ha-908878-m02) DBG | I0528 20:39:25.089229   22978 retry.go:31] will retry after 709.318188ms: waiting for machine to come up
	I0528 20:39:25.800541   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:25.801231   22579 main.go:141] libmachine: (ha-908878-m02) DBG | unable to find current IP address of domain ha-908878-m02 in network mk-ha-908878
	I0528 20:39:25.801256   22579 main.go:141] libmachine: (ha-908878-m02) DBG | I0528 20:39:25.801190   22978 retry.go:31] will retry after 727.346159ms: waiting for machine to come up
	I0528 20:39:26.530258   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:26.530750   22579 main.go:141] libmachine: (ha-908878-m02) DBG | unable to find current IP address of domain ha-908878-m02 in network mk-ha-908878
	I0528 20:39:26.530771   22579 main.go:141] libmachine: (ha-908878-m02) DBG | I0528 20:39:26.530692   22978 retry.go:31] will retry after 1.245703569s: waiting for machine to come up
	I0528 20:39:27.778331   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:27.778725   22579 main.go:141] libmachine: (ha-908878-m02) DBG | unable to find current IP address of domain ha-908878-m02 in network mk-ha-908878
	I0528 20:39:27.778748   22579 main.go:141] libmachine: (ha-908878-m02) DBG | I0528 20:39:27.778680   22978 retry.go:31] will retry after 1.486203146s: waiting for machine to come up
	I0528 20:39:29.267214   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:29.267633   22579 main.go:141] libmachine: (ha-908878-m02) DBG | unable to find current IP address of domain ha-908878-m02 in network mk-ha-908878
	I0528 20:39:29.267655   22579 main.go:141] libmachine: (ha-908878-m02) DBG | I0528 20:39:29.267589   22978 retry.go:31] will retry after 1.41229564s: waiting for machine to come up
	I0528 20:39:30.681044   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:30.681465   22579 main.go:141] libmachine: (ha-908878-m02) DBG | unable to find current IP address of domain ha-908878-m02 in network mk-ha-908878
	I0528 20:39:30.681496   22579 main.go:141] libmachine: (ha-908878-m02) DBG | I0528 20:39:30.681415   22978 retry.go:31] will retry after 2.449880559s: waiting for machine to come up
	I0528 20:39:33.133397   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:33.133838   22579 main.go:141] libmachine: (ha-908878-m02) DBG | unable to find current IP address of domain ha-908878-m02 in network mk-ha-908878
	I0528 20:39:33.133877   22579 main.go:141] libmachine: (ha-908878-m02) DBG | I0528 20:39:33.133803   22978 retry.go:31] will retry after 2.454593184s: waiting for machine to come up
	I0528 20:39:35.590824   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:35.591198   22579 main.go:141] libmachine: (ha-908878-m02) DBG | unable to find current IP address of domain ha-908878-m02 in network mk-ha-908878
	I0528 20:39:35.591220   22579 main.go:141] libmachine: (ha-908878-m02) DBG | I0528 20:39:35.591164   22978 retry.go:31] will retry after 4.393795339s: waiting for machine to come up
	I0528 20:39:39.986744   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:39.987158   22579 main.go:141] libmachine: (ha-908878-m02) DBG | unable to find current IP address of domain ha-908878-m02 in network mk-ha-908878
	I0528 20:39:39.987193   22579 main.go:141] libmachine: (ha-908878-m02) DBG | I0528 20:39:39.987105   22978 retry.go:31] will retry after 3.53535555s: waiting for machine to come up
	I0528 20:39:43.525125   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:43.525616   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has current primary IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:43.525648   22579 main.go:141] libmachine: (ha-908878-m02) Found IP for machine: 192.168.39.239
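
Each "waiting for machine to come up" retry above is a lookup of the VM's MAC address in the mk-ha-908878 network's DHCP leases, with progressively longer delays until a lease appears. A sketch of the equivalent lookup via virsh net-dhcp-leases (network and MAC come from the log; a fixed two-second interval stands in for minikube's backoff):

package main

import (
	"log"
	"os/exec"
	"strings"
	"time"
)

func main() {
	const network = "mk-ha-908878"
	const mac = "52:54:00:b4:bd:28"
	for i := 0; i < 60; i++ { // simple fixed-interval loop for the sketch
		out, err := exec.Command("virsh", "net-dhcp-leases", network).CombinedOutput()
		if err == nil {
			for _, line := range strings.Split(string(out), "\n") {
				if strings.Contains(line, mac) {
					log.Printf("lease found: %s", strings.TrimSpace(line))
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("no DHCP lease for " + mac)
}
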
	I0528 20:39:43.525672   22579 main.go:141] libmachine: (ha-908878-m02) Reserving static IP address...
	I0528 20:39:43.526027   22579 main.go:141] libmachine: (ha-908878-m02) DBG | unable to find host DHCP lease matching {name: "ha-908878-m02", mac: "52:54:00:b4:bd:28", ip: "192.168.39.239"} in network mk-ha-908878
	I0528 20:39:43.595257   22579 main.go:141] libmachine: (ha-908878-m02) DBG | Getting to WaitForSSH function...
	I0528 20:39:43.595287   22579 main.go:141] libmachine: (ha-908878-m02) Reserved static IP address: 192.168.39.239
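
Reserving the static IP means pinning 192.168.39.239 to the VM's MAC in the mk-ha-908878 network so the address survives future DHCP leases. The kvm2 driver does this through the libvirt API; the command-line equivalent is a virsh net-update with an ip-dhcp-host entry, sketched here with the values from the log (--live --config applies it to both the running network and its persistent definition):

package main

import (
	"log"
	"os/exec"
)

func main() {
	entry := `<host mac='52:54:00:b4:bd:28' name='ha-908878-m02' ip='192.168.39.239'/>`
	out, err := exec.Command("virsh", "net-update", "mk-ha-908878",
		"add-last", "ip-dhcp-host", entry, "--live", "--config").CombinedOutput()
	if err != nil {
		log.Fatalf("net-update failed: %v\n%s", err, out)
	}
	log.Printf("%s", out)
}
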
	I0528 20:39:43.595306   22579 main.go:141] libmachine: (ha-908878-m02) Waiting for SSH to be available...
	I0528 20:39:43.597568   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:43.597963   22579 main.go:141] libmachine: (ha-908878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:bd:28", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:39:35 +0000 UTC Type:0 Mac:52:54:00:b4:bd:28 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b4:bd:28}
	I0528 20:39:43.597992   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:43.598141   22579 main.go:141] libmachine: (ha-908878-m02) DBG | Using SSH client type: external
	I0528 20:39:43.598168   22579 main.go:141] libmachine: (ha-908878-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m02/id_rsa (-rw-------)
	I0528 20:39:43.598198   22579 main.go:141] libmachine: (ha-908878-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.239 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0528 20:39:43.598207   22579 main.go:141] libmachine: (ha-908878-m02) DBG | About to run SSH command:
	I0528 20:39:43.598256   22579 main.go:141] libmachine: (ha-908878-m02) DBG | exit 0
	I0528 20:39:43.721955   22579 main.go:141] libmachine: (ha-908878-m02) DBG | SSH cmd err, output: <nil>: 
	I0528 20:39:43.722226   22579 main.go:141] libmachine: (ha-908878-m02) KVM machine creation complete!
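[annotation] The block above is the driver waiting for the freshly created KVM guest to answer on port 22 by repeatedly running `exit 0` over SSH. Below is a minimal Go sketch of the same retry-until-reachable idea using golang.org/x/crypto/ssh; the function name, retry interval and deadline are illustrative assumptions, not minikube's implementation.

    package provision

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // waitForSSH keeps dialing addr and running `exit 0` until the command
    // succeeds or the deadline passes. addr, user and keyPath are placeholders.
    func waitForSSH(addr, user, keyPath string, deadline time.Duration) error {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VMs only
            Timeout:         10 * time.Second,
        }
        stop := time.Now().Add(deadline)
        for time.Now().Before(stop) {
            client, err := ssh.Dial("tcp", addr, cfg)
            if err == nil {
                sess, serr := client.NewSession()
                if serr == nil {
                    rerr := sess.Run("exit 0")
                    sess.Close()
                    client.Close()
                    if rerr == nil {
                        return nil // SSH is usable
                    }
                } else {
                    client.Close()
                }
            }
            time.Sleep(3 * time.Second)
        }
        return fmt.Errorf("ssh not reachable on %s within %s", addr, deadline)
    }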
	I0528 20:39:43.722619   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetConfigRaw
	I0528 20:39:43.723230   22579 main.go:141] libmachine: (ha-908878-m02) Calling .DriverName
	I0528 20:39:43.723435   22579 main.go:141] libmachine: (ha-908878-m02) Calling .DriverName
	I0528 20:39:43.723579   22579 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0528 20:39:43.723597   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetState
	I0528 20:39:43.725144   22579 main.go:141] libmachine: Detecting operating system of created instance...
	I0528 20:39:43.725194   22579 main.go:141] libmachine: Waiting for SSH to be available...
	I0528 20:39:43.725210   22579 main.go:141] libmachine: Getting to WaitForSSH function...
	I0528 20:39:43.725222   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHHostname
	I0528 20:39:43.727491   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:43.727810   22579 main.go:141] libmachine: (ha-908878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:bd:28", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:39:35 +0000 UTC Type:0 Mac:52:54:00:b4:bd:28 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:ha-908878-m02 Clientid:01:52:54:00:b4:bd:28}
	I0528 20:39:43.727833   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:43.727949   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHPort
	I0528 20:39:43.728111   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHKeyPath
	I0528 20:39:43.728269   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHKeyPath
	I0528 20:39:43.728388   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHUsername
	I0528 20:39:43.728528   22579 main.go:141] libmachine: Using SSH client type: native
	I0528 20:39:43.728719   22579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0528 20:39:43.728730   22579 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0528 20:39:43.828757   22579 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 20:39:43.828777   22579 main.go:141] libmachine: Detecting the provisioner...
	I0528 20:39:43.828784   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHHostname
	I0528 20:39:43.831460   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:43.831804   22579 main.go:141] libmachine: (ha-908878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:bd:28", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:39:35 +0000 UTC Type:0 Mac:52:54:00:b4:bd:28 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:ha-908878-m02 Clientid:01:52:54:00:b4:bd:28}
	I0528 20:39:43.831830   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:43.831937   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHPort
	I0528 20:39:43.832131   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHKeyPath
	I0528 20:39:43.832315   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHKeyPath
	I0528 20:39:43.832471   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHUsername
	I0528 20:39:43.832653   22579 main.go:141] libmachine: Using SSH client type: native
	I0528 20:39:43.832802   22579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0528 20:39:43.832812   22579 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0528 20:39:43.934676   22579 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0528 20:39:43.934746   22579 main.go:141] libmachine: found compatible host: buildroot
	I0528 20:39:43.934760   22579 main.go:141] libmachine: Provisioning with buildroot...
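[annotation] Provisioner detection above amounts to running `cat /etc/os-release` on the guest and matching the ID/VERSION fields (Buildroot here). A minimal sketch of parsing that key=value format, assuming a local file path rather than output captured over SSH:

    package provision

    import (
        "bufio"
        "os"
        "strings"
    )

    // parseOSRelease reads key=value pairs from an os-release style file,
    // stripping surrounding quotes, so a caller can match on ID or VERSION_ID.
    func parseOSRelease(path string) (map[string]string, error) {
        f, err := os.Open(path)
        if err != nil {
            return nil, err
        }
        defer f.Close()

        info := map[string]string{}
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" || strings.HasPrefix(line, "#") {
                continue // skip blanks and comments
            }
            k, v, ok := strings.Cut(line, "=")
            if !ok {
                continue
            }
            info[k] = strings.Trim(v, `"`)
        }
        return info, sc.Err()
    }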
	I0528 20:39:43.934772   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetMachineName
	I0528 20:39:43.935019   22579 buildroot.go:166] provisioning hostname "ha-908878-m02"
	I0528 20:39:43.935042   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetMachineName
	I0528 20:39:43.935200   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHHostname
	I0528 20:39:43.937676   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:43.937997   22579 main.go:141] libmachine: (ha-908878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:bd:28", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:39:35 +0000 UTC Type:0 Mac:52:54:00:b4:bd:28 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:ha-908878-m02 Clientid:01:52:54:00:b4:bd:28}
	I0528 20:39:43.938028   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:43.938141   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHPort
	I0528 20:39:43.938335   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHKeyPath
	I0528 20:39:43.938484   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHKeyPath
	I0528 20:39:43.938636   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHUsername
	I0528 20:39:43.938801   22579 main.go:141] libmachine: Using SSH client type: native
	I0528 20:39:43.939009   22579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0528 20:39:43.939022   22579 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-908878-m02 && echo "ha-908878-m02" | sudo tee /etc/hostname
	I0528 20:39:44.056989   22579 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-908878-m02
	
	I0528 20:39:44.057024   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHHostname
	I0528 20:39:44.059725   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:44.060086   22579 main.go:141] libmachine: (ha-908878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:bd:28", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:39:35 +0000 UTC Type:0 Mac:52:54:00:b4:bd:28 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:ha-908878-m02 Clientid:01:52:54:00:b4:bd:28}
	I0528 20:39:44.060114   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:44.060270   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHPort
	I0528 20:39:44.060431   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHKeyPath
	I0528 20:39:44.060580   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHKeyPath
	I0528 20:39:44.060743   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHUsername
	I0528 20:39:44.060929   22579 main.go:141] libmachine: Using SSH client type: native
	I0528 20:39:44.061103   22579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0528 20:39:44.061126   22579 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-908878-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-908878-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-908878-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 20:39:44.172823   22579 main.go:141] libmachine: SSH cmd err, output: <nil>: 
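[annotation] The SSH script above rewrites the guest's 127.0.1.1 entry (or appends one) so /etc/hosts resolves the new hostname. The same logic expressed as a pure string transformation, which is easy to unit-test; this is an illustrative sketch, not the code minikube actually runs:

    package provision

    import (
        "regexp"
        "strings"
    )

    // ensureHostname mirrors the shell snippet above: if no /etc/hosts line
    // already ends in the node name, either rewrite the existing 127.0.1.1
    // entry or append a new one.
    func ensureHostname(hosts, name string) string {
        hasName := regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`)
        if hasName.MatchString(hosts) {
            return hosts // already present, nothing to do
        }
        loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        if loopback.MatchString(hosts) {
            return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
        }
        if hosts != "" && !strings.HasSuffix(hosts, "\n") {
            hosts += "\n"
        }
        return hosts + "127.0.1.1 " + name + "\n"
    }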
	I0528 20:39:44.172854   22579 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18966-3963/.minikube CaCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18966-3963/.minikube}
	I0528 20:39:44.172872   22579 buildroot.go:174] setting up certificates
	I0528 20:39:44.172884   22579 provision.go:84] configureAuth start
	I0528 20:39:44.172898   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetMachineName
	I0528 20:39:44.173203   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetIP
	I0528 20:39:44.175787   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:44.176184   22579 main.go:141] libmachine: (ha-908878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:bd:28", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:39:35 +0000 UTC Type:0 Mac:52:54:00:b4:bd:28 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:ha-908878-m02 Clientid:01:52:54:00:b4:bd:28}
	I0528 20:39:44.176210   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:44.176376   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHHostname
	I0528 20:39:44.178910   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:44.179269   22579 main.go:141] libmachine: (ha-908878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:bd:28", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:39:35 +0000 UTC Type:0 Mac:52:54:00:b4:bd:28 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:ha-908878-m02 Clientid:01:52:54:00:b4:bd:28}
	I0528 20:39:44.179293   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:44.179411   22579 provision.go:143] copyHostCerts
	I0528 20:39:44.179444   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem
	I0528 20:39:44.179485   22579 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem, removing ...
	I0528 20:39:44.179496   22579 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem
	I0528 20:39:44.179581   22579 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem (1078 bytes)
	I0528 20:39:44.179667   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem
	I0528 20:39:44.179691   22579 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem, removing ...
	I0528 20:39:44.179698   22579 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem
	I0528 20:39:44.179741   22579 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem (1123 bytes)
	I0528 20:39:44.179833   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem
	I0528 20:39:44.179858   22579 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem, removing ...
	I0528 20:39:44.179864   22579 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem
	I0528 20:39:44.179904   22579 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem (1675 bytes)
	I0528 20:39:44.179969   22579 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem org=jenkins.ha-908878-m02 san=[127.0.0.1 192.168.39.239 ha-908878-m02 localhost minikube]
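[annotation] The server certificate generated above is issued from the shared CA with both IP and DNS SANs (127.0.0.1, the node IP, the node name, localhost, minikube). A generic crypto/x509 sketch of issuing such a SAN certificate; the key size, validity window and subject are assumptions, not minikube's cert helper:

    package provision

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // newServerCert issues a server certificate signed by ca/caKey with the
    // given IP and DNS SANs, returning the DER bytes and the new private key.
    func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey,
        ips []net.IP, dnsNames []string) ([]byte, *rsa.PrivateKey, error) {

        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"example"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  ips,
            DNSNames:     dnsNames,
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        return der, key, nil
    }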
	I0528 20:39:44.294298   22579 provision.go:177] copyRemoteCerts
	I0528 20:39:44.294358   22579 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 20:39:44.294386   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHHostname
	I0528 20:39:44.297020   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:44.297346   22579 main.go:141] libmachine: (ha-908878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:bd:28", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:39:35 +0000 UTC Type:0 Mac:52:54:00:b4:bd:28 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:ha-908878-m02 Clientid:01:52:54:00:b4:bd:28}
	I0528 20:39:44.297374   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:44.297539   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHPort
	I0528 20:39:44.297731   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHKeyPath
	I0528 20:39:44.297887   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHUsername
	I0528 20:39:44.298017   22579 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m02/id_rsa Username:docker}
	I0528 20:39:44.379975   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0528 20:39:44.380050   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0528 20:39:44.403551   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0528 20:39:44.403610   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0528 20:39:44.426107   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0528 20:39:44.426156   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0528 20:39:44.448590   22579 provision.go:87] duration metric: took 275.694841ms to configureAuth
	I0528 20:39:44.448611   22579 buildroot.go:189] setting minikube options for container-runtime
	I0528 20:39:44.448776   22579 config.go:182] Loaded profile config "ha-908878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 20:39:44.448836   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHHostname
	I0528 20:39:44.451296   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:44.451597   22579 main.go:141] libmachine: (ha-908878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:bd:28", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:39:35 +0000 UTC Type:0 Mac:52:54:00:b4:bd:28 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:ha-908878-m02 Clientid:01:52:54:00:b4:bd:28}
	I0528 20:39:44.451616   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:44.451810   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHPort
	I0528 20:39:44.452002   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHKeyPath
	I0528 20:39:44.452165   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHKeyPath
	I0528 20:39:44.452323   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHUsername
	I0528 20:39:44.452459   22579 main.go:141] libmachine: Using SSH client type: native
	I0528 20:39:44.452620   22579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0528 20:39:44.452641   22579 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0528 20:39:44.718253   22579 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0528 20:39:44.718288   22579 main.go:141] libmachine: Checking connection to Docker...
	I0528 20:39:44.718297   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetURL
	I0528 20:39:44.719624   22579 main.go:141] libmachine: (ha-908878-m02) DBG | Using libvirt version 6000000
	I0528 20:39:44.721831   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:44.722136   22579 main.go:141] libmachine: (ha-908878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:bd:28", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:39:35 +0000 UTC Type:0 Mac:52:54:00:b4:bd:28 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:ha-908878-m02 Clientid:01:52:54:00:b4:bd:28}
	I0528 20:39:44.722157   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:44.722325   22579 main.go:141] libmachine: Docker is up and running!
	I0528 20:39:44.722345   22579 main.go:141] libmachine: Reticulating splines...
	I0528 20:39:44.722352   22579 client.go:171] duration metric: took 23.290767933s to LocalClient.Create
	I0528 20:39:44.722377   22579 start.go:167] duration metric: took 23.290828842s to libmachine.API.Create "ha-908878"
	I0528 20:39:44.722388   22579 start.go:293] postStartSetup for "ha-908878-m02" (driver="kvm2")
	I0528 20:39:44.722397   22579 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0528 20:39:44.722412   22579 main.go:141] libmachine: (ha-908878-m02) Calling .DriverName
	I0528 20:39:44.722616   22579 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0528 20:39:44.722640   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHHostname
	I0528 20:39:44.724676   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:44.725039   22579 main.go:141] libmachine: (ha-908878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:bd:28", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:39:35 +0000 UTC Type:0 Mac:52:54:00:b4:bd:28 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:ha-908878-m02 Clientid:01:52:54:00:b4:bd:28}
	I0528 20:39:44.725064   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:44.725193   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHPort
	I0528 20:39:44.725344   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHKeyPath
	I0528 20:39:44.725493   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHUsername
	I0528 20:39:44.725603   22579 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m02/id_rsa Username:docker}
	I0528 20:39:44.804629   22579 ssh_runner.go:195] Run: cat /etc/os-release
	I0528 20:39:44.808851   22579 info.go:137] Remote host: Buildroot 2023.02.9
	I0528 20:39:44.808874   22579 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/addons for local assets ...
	I0528 20:39:44.808942   22579 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/files for local assets ...
	I0528 20:39:44.809096   22579 filesync.go:149] local asset: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem -> 117602.pem in /etc/ssl/certs
	I0528 20:39:44.809120   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem -> /etc/ssl/certs/117602.pem
	I0528 20:39:44.809272   22579 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0528 20:39:44.819237   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem --> /etc/ssl/certs/117602.pem (1708 bytes)
	I0528 20:39:44.842672   22579 start.go:296] duration metric: took 120.272701ms for postStartSetup
	I0528 20:39:44.842716   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetConfigRaw
	I0528 20:39:44.843241   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetIP
	I0528 20:39:44.845666   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:44.846038   22579 main.go:141] libmachine: (ha-908878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:bd:28", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:39:35 +0000 UTC Type:0 Mac:52:54:00:b4:bd:28 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:ha-908878-m02 Clientid:01:52:54:00:b4:bd:28}
	I0528 20:39:44.846058   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:44.846224   22579 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/config.json ...
	I0528 20:39:44.846399   22579 start.go:128] duration metric: took 23.432774452s to createHost
	I0528 20:39:44.846419   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHHostname
	I0528 20:39:44.848699   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:44.849056   22579 main.go:141] libmachine: (ha-908878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:bd:28", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:39:35 +0000 UTC Type:0 Mac:52:54:00:b4:bd:28 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:ha-908878-m02 Clientid:01:52:54:00:b4:bd:28}
	I0528 20:39:44.849072   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:44.849201   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHPort
	I0528 20:39:44.849377   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHKeyPath
	I0528 20:39:44.849515   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHKeyPath
	I0528 20:39:44.849620   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHUsername
	I0528 20:39:44.849743   22579 main.go:141] libmachine: Using SSH client type: native
	I0528 20:39:44.849917   22579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0528 20:39:44.849928   22579 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0528 20:39:44.951339   22579 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716928784.926939138
	
	I0528 20:39:44.951359   22579 fix.go:216] guest clock: 1716928784.926939138
	I0528 20:39:44.951378   22579 fix.go:229] Guest: 2024-05-28 20:39:44.926939138 +0000 UTC Remote: 2024-05-28 20:39:44.846410206 +0000 UTC m=+76.370697906 (delta=80.528932ms)
	I0528 20:39:44.951409   22579 fix.go:200] guest clock delta is within tolerance: 80.528932ms
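[annotation] The guest clock check above parses the output of `date +%s.%N` on the VM and compares it to the host's wall clock, accepting a small skew (about 80ms in this run). A sketch of that comparison, assuming the caller supplies the local reference time captured just before issuing the command:

    package provision

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // clockDelta parses `date +%s.%N` output and returns how far the guest
    // clock is from the supplied local reference time.
    func clockDelta(guestOut string, local time.Time) (time.Duration, error) {
        fields := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
        sec, err := strconv.ParseInt(fields[0], 10, 64)
        if err != nil {
            return 0, fmt.Errorf("parse seconds: %w", err)
        }
        var nsec int64
        if len(fields) == 2 {
            // pad/truncate the fractional part to exactly nanoseconds
            frac := fields[1] + strings.Repeat("0", 9)
            nsec, err = strconv.ParseInt(frac[:9], 10, 64)
            if err != nil {
                return 0, fmt.Errorf("parse nanoseconds: %w", err)
            }
        }
        return time.Unix(sec, nsec).Sub(local), nil
    }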
	I0528 20:39:44.951416   22579 start.go:83] releasing machines lock for "ha-908878-m02", held for 23.537904811s
	I0528 20:39:44.951434   22579 main.go:141] libmachine: (ha-908878-m02) Calling .DriverName
	I0528 20:39:44.951692   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetIP
	I0528 20:39:44.954325   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:44.954702   22579 main.go:141] libmachine: (ha-908878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:bd:28", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:39:35 +0000 UTC Type:0 Mac:52:54:00:b4:bd:28 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:ha-908878-m02 Clientid:01:52:54:00:b4:bd:28}
	I0528 20:39:44.954724   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:44.956651   22579 out.go:177] * Found network options:
	I0528 20:39:44.958031   22579 out.go:177]   - NO_PROXY=192.168.39.100
	W0528 20:39:44.959255   22579 proxy.go:119] fail to check proxy env: Error ip not in block
	I0528 20:39:44.959295   22579 main.go:141] libmachine: (ha-908878-m02) Calling .DriverName
	I0528 20:39:44.959786   22579 main.go:141] libmachine: (ha-908878-m02) Calling .DriverName
	I0528 20:39:44.959957   22579 main.go:141] libmachine: (ha-908878-m02) Calling .DriverName
	I0528 20:39:44.960029   22579 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0528 20:39:44.960075   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHHostname
	W0528 20:39:44.960148   22579 proxy.go:119] fail to check proxy env: Error ip not in block
	I0528 20:39:44.960221   22579 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0528 20:39:44.960242   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHHostname
	I0528 20:39:44.962556   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:44.962879   22579 main.go:141] libmachine: (ha-908878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:bd:28", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:39:35 +0000 UTC Type:0 Mac:52:54:00:b4:bd:28 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:ha-908878-m02 Clientid:01:52:54:00:b4:bd:28}
	I0528 20:39:44.962909   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:44.962929   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:44.963063   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHPort
	I0528 20:39:44.963233   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHKeyPath
	I0528 20:39:44.963371   22579 main.go:141] libmachine: (ha-908878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:bd:28", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:39:35 +0000 UTC Type:0 Mac:52:54:00:b4:bd:28 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:ha-908878-m02 Clientid:01:52:54:00:b4:bd:28}
	I0528 20:39:44.963394   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHUsername
	I0528 20:39:44.963393   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:44.963504   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHPort
	I0528 20:39:44.963569   22579 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m02/id_rsa Username:docker}
	I0528 20:39:44.963622   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHKeyPath
	I0528 20:39:44.963717   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHUsername
	I0528 20:39:44.963851   22579 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m02/id_rsa Username:docker}
	I0528 20:39:45.191050   22579 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0528 20:39:45.197540   22579 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0528 20:39:45.197614   22579 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 20:39:45.213549   22579 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0528 20:39:45.213573   22579 start.go:494] detecting cgroup driver to use...
	I0528 20:39:45.213632   22579 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0528 20:39:45.229419   22579 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 20:39:45.243034   22579 docker.go:217] disabling cri-docker service (if available) ...
	I0528 20:39:45.243096   22579 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0528 20:39:45.256232   22579 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0528 20:39:45.269876   22579 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0528 20:39:45.388677   22579 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0528 20:39:45.532176   22579 docker.go:233] disabling docker service ...
	I0528 20:39:45.532248   22579 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0528 20:39:45.547274   22579 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0528 20:39:45.559583   22579 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0528 20:39:45.693293   22579 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0528 20:39:45.828110   22579 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0528 20:39:45.844272   22579 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 20:39:45.862898   22579 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0528 20:39:45.862963   22579 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:39:45.872981   22579 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0528 20:39:45.873042   22579 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:39:45.882982   22579 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:39:45.892793   22579 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:39:45.902631   22579 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0528 20:39:45.912838   22579 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:39:45.922547   22579 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:39:45.939496   22579 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:39:45.949578   22579 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0528 20:39:45.958529   22579 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0528 20:39:45.958578   22579 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0528 20:39:45.971321   22579 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0528 20:39:45.980291   22579 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 20:39:46.096244   22579 ssh_runner.go:195] Run: sudo systemctl restart crio
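[annotation] The series of `sed -i` calls above points CRI-O at the registry.k8s.io/pause:3.9 image, switches it to the cgroupfs cgroup manager, and injects the unprivileged-port sysctl before restarting the service. A sketch of the same set-or-append edit on a crio drop-in treated as plain text (the log edits /etc/crio/crio.conf.d/02-crio.conf in place); this is not minikube's helper and does not parse TOML:

    package provision

    import "regexp"

    // setCrioOption replaces an existing (possibly commented) `key = ...` line
    // in a crio drop-in, or appends one if the key is absent.
    func setCrioOption(conf, key, value string) string {
        re := regexp.MustCompile(`(?m)^\s*#?\s*` + regexp.QuoteMeta(key) + `\s*=.*$`)
        line := key + ` = "` + value + `"`
        if re.MatchString(conf) {
            return re.ReplaceAllString(conf, line)
        }
        return conf + "\n" + line + "\n"
    }

For example, setCrioOption(conf, "cgroup_manager", "cgroupfs") and setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.9") correspond to the first two sed edits in the log.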
	I0528 20:39:46.234036   22579 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0528 20:39:46.234107   22579 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0528 20:39:46.239030   22579 start.go:562] Will wait 60s for crictl version
	I0528 20:39:46.239075   22579 ssh_runner.go:195] Run: which crictl
	I0528 20:39:46.242841   22579 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0528 20:39:46.284071   22579 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0528 20:39:46.284155   22579 ssh_runner.go:195] Run: crio --version
	I0528 20:39:46.311989   22579 ssh_runner.go:195] Run: crio --version
	I0528 20:39:46.344750   22579 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0528 20:39:46.346078   22579 out.go:177]   - env NO_PROXY=192.168.39.100
	I0528 20:39:46.347390   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetIP
	I0528 20:39:46.350120   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:46.350476   22579 main.go:141] libmachine: (ha-908878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:bd:28", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:39:35 +0000 UTC Type:0 Mac:52:54:00:b4:bd:28 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:ha-908878-m02 Clientid:01:52:54:00:b4:bd:28}
	I0528 20:39:46.350500   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:46.350656   22579 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0528 20:39:46.354730   22579 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 20:39:46.366962   22579 mustload.go:65] Loading cluster: ha-908878
	I0528 20:39:46.367142   22579 config.go:182] Loaded profile config "ha-908878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 20:39:46.367396   22579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:39:46.367427   22579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:39:46.382472   22579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33255
	I0528 20:39:46.382858   22579 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:39:46.383291   22579 main.go:141] libmachine: Using API Version  1
	I0528 20:39:46.383311   22579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:39:46.383606   22579 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:39:46.383785   22579 main.go:141] libmachine: (ha-908878) Calling .GetState
	I0528 20:39:46.385324   22579 host.go:66] Checking if "ha-908878" exists ...
	I0528 20:39:46.385658   22579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:39:46.385689   22579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:39:46.399803   22579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42729
	I0528 20:39:46.400242   22579 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:39:46.400660   22579 main.go:141] libmachine: Using API Version  1
	I0528 20:39:46.400680   22579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:39:46.400973   22579 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:39:46.401158   22579 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:39:46.401309   22579 certs.go:68] Setting up /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878 for IP: 192.168.39.239
	I0528 20:39:46.401319   22579 certs.go:194] generating shared ca certs ...
	I0528 20:39:46.401332   22579 certs.go:226] acquiring lock for ca certs: {Name:mkf081a2e878fd451cfbdf01dc28a5f7a6a3abc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:39:46.401442   22579 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key
	I0528 20:39:46.401476   22579 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key
	I0528 20:39:46.401485   22579 certs.go:256] generating profile certs ...
	I0528 20:39:46.401544   22579 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/client.key
	I0528 20:39:46.401568   22579 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key.d42c2f8b
	I0528 20:39:46.401581   22579 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt.d42c2f8b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.100 192.168.39.239 192.168.39.254]
	I0528 20:39:46.532027   22579 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt.d42c2f8b ...
	I0528 20:39:46.532054   22579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt.d42c2f8b: {Name:mk5230ac00b5ed8d9e975e2641c42648f309e058 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:39:46.532238   22579 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key.d42c2f8b ...
	I0528 20:39:46.532258   22579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key.d42c2f8b: {Name:mk7d4a0cf0ce90f7f8946c2980e1db3d0d9e0d90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:39:46.532356   22579 certs.go:381] copying /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt.d42c2f8b -> /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt
	I0528 20:39:46.532490   22579 certs.go:385] copying /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key.d42c2f8b -> /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key
	I0528 20:39:46.532608   22579 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/proxy-client.key
	I0528 20:39:46.532622   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0528 20:39:46.532634   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0528 20:39:46.532645   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0528 20:39:46.532658   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0528 20:39:46.532670   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0528 20:39:46.532679   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0528 20:39:46.532689   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0528 20:39:46.532697   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0528 20:39:46.532746   22579 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem (1338 bytes)
	W0528 20:39:46.532771   22579 certs.go:480] ignoring /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760_empty.pem, impossibly tiny 0 bytes
	I0528 20:39:46.532782   22579 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem (1675 bytes)
	I0528 20:39:46.532814   22579 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem (1078 bytes)
	I0528 20:39:46.532848   22579 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem (1123 bytes)
	I0528 20:39:46.532877   22579 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem (1675 bytes)
	I0528 20:39:46.532933   22579 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem (1708 bytes)
	I0528 20:39:46.532972   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0528 20:39:46.532993   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem -> /usr/share/ca-certificates/11760.pem
	I0528 20:39:46.533006   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem -> /usr/share/ca-certificates/117602.pem
	I0528 20:39:46.533038   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:39:46.535807   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:39:46.536152   22579 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:39:46.536181   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:39:46.536309   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:39:46.536490   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:39:46.536657   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:39:46.536781   22579 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/id_rsa Username:docker}
	I0528 20:39:46.610132   22579 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0528 20:39:46.615159   22579 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0528 20:39:46.626019   22579 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0528 20:39:46.630287   22579 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0528 20:39:46.641191   22579 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0528 20:39:46.645573   22579 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0528 20:39:46.655284   22579 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0528 20:39:46.659505   22579 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0528 20:39:46.669628   22579 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0528 20:39:46.673931   22579 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0528 20:39:46.684107   22579 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0528 20:39:46.688245   22579 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0528 20:39:46.698832   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0528 20:39:46.725261   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0528 20:39:46.750358   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0528 20:39:46.774000   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0528 20:39:46.797471   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0528 20:39:46.820832   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0528 20:39:46.844123   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0528 20:39:46.866834   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0528 20:39:46.889813   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0528 20:39:46.913363   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem --> /usr/share/ca-certificates/11760.pem (1338 bytes)
	I0528 20:39:46.937387   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem --> /usr/share/ca-certificates/117602.pem (1708 bytes)
	I0528 20:39:46.961455   22579 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0528 20:39:46.977450   22579 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0528 20:39:46.993498   22579 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0528 20:39:47.009273   22579 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0528 20:39:47.025204   22579 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0528 20:39:47.043162   22579 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0528 20:39:47.061042   22579 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0528 20:39:47.076891   22579 ssh_runner.go:195] Run: openssl version
	I0528 20:39:47.082486   22579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0528 20:39:47.092423   22579 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0528 20:39:47.096733   22579 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 28 20:22 /usr/share/ca-certificates/minikubeCA.pem
	I0528 20:39:47.096777   22579 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0528 20:39:47.102259   22579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0528 20:39:47.112355   22579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11760.pem && ln -fs /usr/share/ca-certificates/11760.pem /etc/ssl/certs/11760.pem"
	I0528 20:39:47.122538   22579 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11760.pem
	I0528 20:39:47.126830   22579 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 28 20:34 /usr/share/ca-certificates/11760.pem
	I0528 20:39:47.126886   22579 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11760.pem
	I0528 20:39:47.132503   22579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11760.pem /etc/ssl/certs/51391683.0"
	I0528 20:39:47.143222   22579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/117602.pem && ln -fs /usr/share/ca-certificates/117602.pem /etc/ssl/certs/117602.pem"
	I0528 20:39:47.154480   22579 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117602.pem
	I0528 20:39:47.159114   22579 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 28 20:34 /usr/share/ca-certificates/117602.pem
	I0528 20:39:47.159167   22579 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117602.pem
	I0528 20:39:47.164874   22579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/117602.pem /etc/ssl/certs/3ec20f2e.0"
	I0528 20:39:47.177374   22579 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0528 20:39:47.181611   22579 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0528 20:39:47.181662   22579 kubeadm.go:928] updating node {m02 192.168.39.239 8443 v1.30.1 crio true true} ...
	I0528 20:39:47.181750   22579 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-908878-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.239
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-908878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
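[annotation] The kubelet unit above is regenerated per node so --hostname-override and --node-ip carry the joining node's identity (ha-908878-m02 / 192.168.39.239). A small text/template sketch of rendering such a drop-in; the template body is a simplified copy of the [Service] section shown in the log, not the exact file minikube ships:

    package provision

    import (
        "strings"
        "text/template"
    )

    const kubeletUnitTmpl = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart={{.BinDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    type kubeletParams struct {
        BinDir, NodeName, NodeIP string
    }

    // renderKubeletUnit fills in the node-specific fields so each control-plane
    // node registers with its own IP and hostname.
    func renderKubeletUnit(p kubeletParams) (string, error) {
        t, err := template.New("kubelet").Parse(kubeletUnitTmpl)
        if err != nil {
            return "", err
        }
        var b strings.Builder
        if err := t.Execute(&b, p); err != nil {
            return "", err
        }
        return b.String(), nil
    }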
	I0528 20:39:47.181801   22579 kube-vip.go:115] generating kube-vip config ...
	I0528 20:39:47.181841   22579 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0528 20:39:47.198400   22579 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0528 20:39:47.198460   22579 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
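[annotation] The manifest above is the static pod that runs kube-vip on each control-plane node; its `address` env var carries the HA virtual IP (192.168.39.254) that fronts all API servers. A sketch of pulling that value back out of a rendered manifest with gopkg.in/yaml.v3, e.g. as a sanity check in a test; the struct only covers the fields being inspected and is not part of minikube:

    package provision

    import (
        "fmt"

        "gopkg.in/yaml.v3"
    )

    // staticPod is the minimal shape of the manifest above.
    type staticPod struct {
        Spec struct {
            Containers []struct {
                Name string `yaml:"name"`
                Env  []struct {
                    Name  string `yaml:"name"`
                    Value string `yaml:"value"`
                } `yaml:"env"`
            } `yaml:"containers"`
        } `yaml:"spec"`
    }

    // vipAddress returns the value of the `address` env var in a rendered
    // kube-vip manifest, so it can be compared against the expected HA VIP.
    func vipAddress(manifest []byte) (string, error) {
        var pod staticPod
        if err := yaml.Unmarshal(manifest, &pod); err != nil {
            return "", err
        }
        for _, c := range pod.Spec.Containers {
            for _, e := range c.Env {
                if e.Name == "address" {
                    return e.Value, nil
                }
            }
        }
        return "", fmt.Errorf("no address env var in manifest")
    }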
	I0528 20:39:47.198505   22579 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0528 20:39:47.207958   22579 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0528 20:39:47.208011   22579 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0528 20:39:47.217671   22579 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0528 20:39:47.217699   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/linux/amd64/v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0528 20:39:47.217779   22579 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18966-3963/.minikube/cache/linux/amd64/v1.30.1/kubeadm
	I0528 20:39:47.217790   22579 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0528 20:39:47.217810   22579 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18966-3963/.minikube/cache/linux/amd64/v1.30.1/kubelet
	I0528 20:39:47.222049   22579 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0528 20:39:47.222071   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/cache/linux/amd64/v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0528 20:39:48.311013   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/linux/amd64/v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0528 20:39:48.311112   22579 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0528 20:39:48.317311   22579 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0528 20:39:48.317361   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/cache/linux/amd64/v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0528 20:39:48.705592   22579 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 20:39:48.720396   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/linux/amd64/v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0528 20:39:48.720483   22579 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0528 20:39:48.724928   22579 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0528 20:39:48.724952   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/cache/linux/amd64/v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
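	(The checksum=file:...sha256 parameters in the download URLs above mean each cached binary is checked against the digest published next to it on dl.k8s.io before being copied into /var/lib/minikube/binaries. A rough Go sketch of that kind of verification follows, assuming the binary and its .sha256 sidecar are already on disk and the sidecar holds only the hex digest; the file names are illustrative, not minikube's code.)

	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"os"
		"strings"
	)

	// verifySHA256 streams path through SHA-256 and compares the result with
	// the hex digest stored in sumPath (the ".sha256" sidecar file).
	func verifySHA256(path, sumPath string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		h := sha256.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		want, err := os.ReadFile(sumPath)
		if err != nil {
			return err
		}
		if hex.EncodeToString(h.Sum(nil)) != strings.TrimSpace(string(want)) {
			return fmt.Errorf("checksum mismatch for %s", path)
		}
		return nil
	}

	func main() {
		// Illustrative names; this run caches binaries under
		// .minikube/cache/linux/amd64/v1.30.1/.
		if err := verifySHA256("kubelet", "kubelet.sha256"); err != nil {
			panic(err)
		}
		fmt.Println("checksum OK")
	}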
	I0528 20:39:49.138641   22579 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0528 20:39:49.148692   22579 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0528 20:39:49.164623   22579 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0528 20:39:49.179922   22579 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0528 20:39:49.196215   22579 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0528 20:39:49.199952   22579 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 20:39:49.212984   22579 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 20:39:49.342900   22579 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 20:39:49.359935   22579 host.go:66] Checking if "ha-908878" exists ...
	I0528 20:39:49.360416   22579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:39:49.360472   22579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:39:49.375579   22579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40659
	I0528 20:39:49.376059   22579 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:39:49.376504   22579 main.go:141] libmachine: Using API Version  1
	I0528 20:39:49.376526   22579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:39:49.376863   22579 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:39:49.377123   22579 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:39:49.377295   22579 start.go:316] joinCluster: &{Name:ha-908878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-908878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.239 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 20:39:49.377389   22579 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0528 20:39:49.377413   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:39:49.380241   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:39:49.380644   22579 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:39:49.380672   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:39:49.380804   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:39:49.380994   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:39:49.381127   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:39:49.381277   22579 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/id_rsa Username:docker}
	I0528 20:39:49.527440   22579 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.239 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0528 20:39:49.527480   22579 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token x9w3sn.kua0mpya9sje97dw --discovery-token-ca-cert-hash sha256:1de7c92f7c6fda1d3adee02fe3e58e442e8ad50d9d224864ca8764aa1a3ae8bb --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-908878-m02 --control-plane --apiserver-advertise-address=192.168.39.239 --apiserver-bind-port=8443"
	I0528 20:40:11.273238   22579 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token x9w3sn.kua0mpya9sje97dw --discovery-token-ca-cert-hash sha256:1de7c92f7c6fda1d3adee02fe3e58e442e8ad50d9d224864ca8764aa1a3ae8bb --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-908878-m02 --control-plane --apiserver-advertise-address=192.168.39.239 --apiserver-bind-port=8443": (21.74572677s)
	I0528 20:40:11.273280   22579 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0528 20:40:11.742426   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-908878-m02 minikube.k8s.io/updated_at=2024_05_28T20_40_11_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82 minikube.k8s.io/name=ha-908878 minikube.k8s.io/primary=false
	I0528 20:40:11.874237   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-908878-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0528 20:40:11.982143   22579 start.go:318] duration metric: took 22.604844073s to joinCluster
	I0528 20:40:11.982217   22579 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.239 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0528 20:40:11.983388   22579 out.go:177] * Verifying Kubernetes components...
	I0528 20:40:11.982510   22579 config.go:182] Loaded profile config "ha-908878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 20:40:11.984848   22579 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 20:40:12.282196   22579 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 20:40:12.356715   22579 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 20:40:12.357043   22579 kapi.go:59] client config for ha-908878: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/client.crt", KeyFile:"/home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/client.key", CAFile:"/home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf8220), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0528 20:40:12.357103   22579 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.100:8443
	I0528 20:40:12.357362   22579 node_ready.go:35] waiting up to 6m0s for node "ha-908878-m02" to be "Ready" ...
	I0528 20:40:12.357456   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:12.357466   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:12.357476   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:12.357481   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:12.367427   22579 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0528 20:40:12.858397   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:12.858419   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:12.858428   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:12.858432   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:12.862303   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:13.358491   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:13.358514   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:13.358521   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:13.358524   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:13.361452   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:13.858081   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:13.858106   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:13.858116   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:13.858121   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:13.863341   22579 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 20:40:14.357549   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:14.357570   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:14.357577   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:14.357582   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:14.360390   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:14.360996   22579 node_ready.go:53] node "ha-908878-m02" has status "Ready":"False"
	I0528 20:40:14.857913   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:14.857933   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:14.857941   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:14.857946   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:14.860547   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:15.357577   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:15.357599   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:15.357607   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:15.357612   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:15.361031   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:15.858180   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:15.858201   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:15.858212   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:15.858219   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:15.860946   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:16.357955   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:16.357981   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:16.357990   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:16.357995   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:16.361851   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:16.362569   22579 node_ready.go:53] node "ha-908878-m02" has status "Ready":"False"
	I0528 20:40:16.858218   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:16.858246   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:16.858258   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:16.858265   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:16.861523   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:17.357611   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:17.357641   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:17.357652   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:17.357657   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:17.361228   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:17.858310   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:17.858332   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:17.858341   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:17.858346   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:17.861833   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:18.357663   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:18.357687   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:18.357696   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:18.357701   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:18.360946   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:18.857606   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:18.857625   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:18.857633   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:18.857636   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:18.860654   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:18.861375   22579 node_ready.go:53] node "ha-908878-m02" has status "Ready":"False"
	I0528 20:40:19.357640   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:19.357667   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:19.357679   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:19.357684   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:19.360599   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:19.858085   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:19.858107   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:19.858114   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:19.858117   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:19.861490   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:19.862265   22579 node_ready.go:49] node "ha-908878-m02" has status "Ready":"True"
	I0528 20:40:19.862303   22579 node_ready.go:38] duration metric: took 7.504907421s for node "ha-908878-m02" to be "Ready" ...
	I0528 20:40:19.862314   22579 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 20:40:19.862372   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods
	I0528 20:40:19.862383   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:19.862393   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:19.862401   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:19.868588   22579 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0528 20:40:19.876604   22579 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-5fmns" in "kube-system" namespace to be "Ready" ...
	I0528 20:40:19.876682   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5fmns
	I0528 20:40:19.876694   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:19.876701   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:19.876707   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:19.879285   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:19.879865   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878
	I0528 20:40:19.879880   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:19.879887   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:19.879890   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:19.882172   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:19.882699   22579 pod_ready.go:92] pod "coredns-7db6d8ff4d-5fmns" in "kube-system" namespace has status "Ready":"True"
	I0528 20:40:19.882717   22579 pod_ready.go:81] duration metric: took 6.090072ms for pod "coredns-7db6d8ff4d-5fmns" in "kube-system" namespace to be "Ready" ...
	I0528 20:40:19.882727   22579 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mvx67" in "kube-system" namespace to be "Ready" ...
	I0528 20:40:19.882818   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mvx67
	I0528 20:40:19.882830   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:19.882840   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:19.882846   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:19.885132   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:19.885668   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878
	I0528 20:40:19.885681   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:19.885687   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:19.885692   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:19.888785   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:19.889868   22579 pod_ready.go:92] pod "coredns-7db6d8ff4d-mvx67" in "kube-system" namespace has status "Ready":"True"
	I0528 20:40:19.889886   22579 pod_ready.go:81] duration metric: took 7.150945ms for pod "coredns-7db6d8ff4d-mvx67" in "kube-system" namespace to be "Ready" ...
	I0528 20:40:19.889896   22579 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-908878" in "kube-system" namespace to be "Ready" ...
	I0528 20:40:19.889949   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878
	I0528 20:40:19.889961   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:19.889969   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:19.889974   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:19.892607   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:19.893158   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878
	I0528 20:40:19.893170   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:19.893176   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:19.893178   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:19.895416   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:19.895980   22579 pod_ready.go:92] pod "etcd-ha-908878" in "kube-system" namespace has status "Ready":"True"
	I0528 20:40:19.895995   22579 pod_ready.go:81] duration metric: took 6.092752ms for pod "etcd-ha-908878" in "kube-system" namespace to be "Ready" ...
	I0528 20:40:19.896002   22579 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-908878-m02" in "kube-system" namespace to be "Ready" ...
	I0528 20:40:19.896052   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m02
	I0528 20:40:19.896063   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:19.896073   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:19.896081   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:19.898796   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:19.899295   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:19.899307   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:19.899314   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:19.899318   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:19.901893   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:20.396912   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m02
	I0528 20:40:20.396935   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:20.396947   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:20.396951   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:20.399862   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:20.400491   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:20.400507   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:20.400514   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:20.400518   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:20.402904   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:20.897130   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m02
	I0528 20:40:20.897152   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:20.897159   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:20.897162   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:20.900991   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:20.902387   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:20.902400   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:20.902407   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:20.902411   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:20.905660   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:21.397166   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m02
	I0528 20:40:21.397185   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:21.397192   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:21.397196   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:21.400647   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:21.401700   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:21.401715   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:21.401724   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:21.401729   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:21.404362   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:21.897123   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m02
	I0528 20:40:21.897145   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:21.897154   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:21.897163   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:21.900081   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:21.900724   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:21.900738   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:21.900747   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:21.900752   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:21.903031   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:21.903517   22579 pod_ready.go:102] pod "etcd-ha-908878-m02" in "kube-system" namespace has status "Ready":"False"
	I0528 20:40:22.396851   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m02
	I0528 20:40:22.396873   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:22.396881   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:22.396886   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:22.399923   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:22.400771   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:22.400785   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:22.400792   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:22.400796   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:22.404167   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:22.896542   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m02
	I0528 20:40:22.896564   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:22.896582   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:22.896587   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:22.899781   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:22.900729   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:22.900742   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:22.900750   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:22.900754   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:22.903452   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:23.396319   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m02
	I0528 20:40:23.396345   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:23.396353   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:23.396357   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:23.399657   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:23.400305   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:23.400318   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:23.400325   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:23.400328   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:23.403206   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:23.896164   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m02
	I0528 20:40:23.896186   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:23.896194   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:23.896198   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:23.899319   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:23.900141   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:23.900158   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:23.900168   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:23.900172   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:23.902648   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:24.397151   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m02
	I0528 20:40:24.397171   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:24.397179   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:24.397184   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:24.400324   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:24.400944   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:24.400960   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:24.400967   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:24.400971   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:24.403693   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:24.404232   22579 pod_ready.go:102] pod "etcd-ha-908878-m02" in "kube-system" namespace has status "Ready":"False"
	I0528 20:40:24.896515   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m02
	I0528 20:40:24.896539   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:24.896545   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:24.896549   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:24.900241   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:24.901072   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:24.901088   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:24.901097   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:24.901104   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:24.903768   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:25.396925   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m02
	I0528 20:40:25.396948   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:25.396961   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:25.396968   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:25.399876   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:25.400663   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:25.400679   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:25.400689   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:25.400694   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:25.403298   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:25.896184   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m02
	I0528 20:40:25.896207   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:25.896215   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:25.896220   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:25.899491   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:25.900143   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:25.900158   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:25.900166   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:25.900171   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:25.902799   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:26.396296   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m02
	I0528 20:40:26.396315   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:26.396322   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:26.396327   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:26.399795   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:26.400376   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:26.400389   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:26.400397   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:26.400400   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:26.402974   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:26.896709   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m02
	I0528 20:40:26.896730   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:26.896738   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:26.896744   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:26.899957   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:26.900709   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:26.900724   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:26.900731   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:26.900735   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:26.904001   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:26.905064   22579 pod_ready.go:102] pod "etcd-ha-908878-m02" in "kube-system" namespace has status "Ready":"False"
	I0528 20:40:27.396489   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m02
	I0528 20:40:27.396510   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:27.396518   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:27.396522   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:27.401472   22579 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 20:40:27.402071   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:27.402087   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:27.402094   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:27.402099   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:27.404615   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:27.896445   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m02
	I0528 20:40:27.896470   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:27.896480   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:27.896487   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:27.899580   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:27.900404   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:27.900420   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:27.900428   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:27.900433   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:27.902851   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:28.396998   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m02
	I0528 20:40:28.397031   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:28.397043   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:28.397048   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:28.400447   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:28.401281   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:28.401296   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:28.401305   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:28.401309   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:28.404064   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:28.896987   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m02
	I0528 20:40:28.897012   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:28.897022   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:28.897033   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:28.900986   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:28.901922   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:28.901935   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:28.901942   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:28.901945   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:28.904836   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:28.905437   22579 pod_ready.go:102] pod "etcd-ha-908878-m02" in "kube-system" namespace has status "Ready":"False"
	I0528 20:40:29.396351   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m02
	I0528 20:40:29.396371   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:29.396379   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:29.396383   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:29.399460   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:29.400020   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:29.400034   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:29.400041   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:29.400044   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:29.402556   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:29.896484   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m02
	I0528 20:40:29.896519   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:29.896526   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:29.896530   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:29.899524   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:29.900201   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:29.900216   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:29.900222   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:29.900228   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:29.902578   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:29.903194   22579 pod_ready.go:92] pod "etcd-ha-908878-m02" in "kube-system" namespace has status "Ready":"True"
	I0528 20:40:29.903215   22579 pod_ready.go:81] duration metric: took 10.007205948s for pod "etcd-ha-908878-m02" in "kube-system" namespace to be "Ready" ...
	I0528 20:40:29.903233   22579 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-908878" in "kube-system" namespace to be "Ready" ...
	I0528 20:40:29.903288   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-908878
	I0528 20:40:29.903291   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:29.903298   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:29.903302   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:29.905453   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:29.906183   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878
	I0528 20:40:29.906200   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:29.906210   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:29.906221   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:29.908470   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:29.909003   22579 pod_ready.go:92] pod "kube-apiserver-ha-908878" in "kube-system" namespace has status "Ready":"True"
	I0528 20:40:29.909021   22579 pod_ready.go:81] duration metric: took 5.781531ms for pod "kube-apiserver-ha-908878" in "kube-system" namespace to be "Ready" ...
	I0528 20:40:29.909029   22579 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-908878-m02" in "kube-system" namespace to be "Ready" ...
	I0528 20:40:29.909072   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-908878-m02
	I0528 20:40:29.909079   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:29.909086   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:29.909094   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:29.911338   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:29.911924   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:29.911937   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:29.911944   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:29.911948   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:29.913819   22579 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0528 20:40:29.914272   22579 pod_ready.go:92] pod "kube-apiserver-ha-908878-m02" in "kube-system" namespace has status "Ready":"True"
	I0528 20:40:29.914287   22579 pod_ready.go:81] duration metric: took 5.252021ms for pod "kube-apiserver-ha-908878-m02" in "kube-system" namespace to be "Ready" ...
	I0528 20:40:29.914295   22579 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-908878" in "kube-system" namespace to be "Ready" ...
	I0528 20:40:29.914342   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-908878
	I0528 20:40:29.914351   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:29.914357   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:29.914361   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:29.917445   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:29.918464   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878
	I0528 20:40:29.918479   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:29.918487   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:29.918493   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:29.920744   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:29.921259   22579 pod_ready.go:92] pod "kube-controller-manager-ha-908878" in "kube-system" namespace has status "Ready":"True"
	I0528 20:40:29.921275   22579 pod_ready.go:81] duration metric: took 6.973107ms for pod "kube-controller-manager-ha-908878" in "kube-system" namespace to be "Ready" ...
	I0528 20:40:29.921282   22579 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-908878-m02" in "kube-system" namespace to be "Ready" ...
	I0528 20:40:29.921319   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-908878-m02
	I0528 20:40:29.921326   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:29.921332   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:29.921338   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:29.923660   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:29.924192   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:29.924207   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:29.924214   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:29.924219   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:29.926370   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:29.926754   22579 pod_ready.go:92] pod "kube-controller-manager-ha-908878-m02" in "kube-system" namespace has status "Ready":"True"
	I0528 20:40:29.926766   22579 pod_ready.go:81] duration metric: took 5.478491ms for pod "kube-controller-manager-ha-908878-m02" in "kube-system" namespace to be "Ready" ...
	I0528 20:40:29.926773   22579 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ng8mq" in "kube-system" namespace to be "Ready" ...
	I0528 20:40:30.097135   22579 request.go:629] Waited for 170.31592ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ng8mq
	I0528 20:40:30.097186   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ng8mq
	I0528 20:40:30.097191   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:30.097198   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:30.097204   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:30.100581   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:30.297511   22579 request.go:629] Waited for 196.357126ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-908878
	I0528 20:40:30.297569   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878
	I0528 20:40:30.297574   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:30.297581   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:30.297597   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:30.301046   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:30.301833   22579 pod_ready.go:92] pod "kube-proxy-ng8mq" in "kube-system" namespace has status "Ready":"True"
	I0528 20:40:30.301850   22579 pod_ready.go:81] duration metric: took 375.071009ms for pod "kube-proxy-ng8mq" in "kube-system" namespace to be "Ready" ...
	I0528 20:40:30.301861   22579 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pg89k" in "kube-system" namespace to be "Ready" ...
	I0528 20:40:30.497059   22579 request.go:629] Waited for 195.119235ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pg89k
	I0528 20:40:30.497120   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pg89k
	I0528 20:40:30.497127   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:30.497137   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:30.497146   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:30.500175   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:30.697193   22579 request.go:629] Waited for 195.998479ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:30.697246   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:30.697251   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:30.697257   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:30.697261   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:30.700322   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:30.700798   22579 pod_ready.go:92] pod "kube-proxy-pg89k" in "kube-system" namespace has status "Ready":"True"
	I0528 20:40:30.700813   22579 pod_ready.go:81] duration metric: took 398.943236ms for pod "kube-proxy-pg89k" in "kube-system" namespace to be "Ready" ...
	I0528 20:40:30.700821   22579 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-908878" in "kube-system" namespace to be "Ready" ...
	I0528 20:40:30.896880   22579 request.go:629] Waited for 195.997769ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-908878
	I0528 20:40:30.896957   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-908878
	I0528 20:40:30.896966   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:30.896976   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:30.897004   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:30.900173   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:31.097368   22579 request.go:629] Waited for 196.340666ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-908878
	I0528 20:40:31.097417   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878
	I0528 20:40:31.097423   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:31.097436   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:31.097442   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:31.101138   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:31.101882   22579 pod_ready.go:92] pod "kube-scheduler-ha-908878" in "kube-system" namespace has status "Ready":"True"
	I0528 20:40:31.101905   22579 pod_ready.go:81] duration metric: took 401.07596ms for pod "kube-scheduler-ha-908878" in "kube-system" namespace to be "Ready" ...
	I0528 20:40:31.101917   22579 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-908878-m02" in "kube-system" namespace to be "Ready" ...
	I0528 20:40:31.296878   22579 request.go:629] Waited for 194.881731ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-908878-m02
	I0528 20:40:31.296951   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-908878-m02
	I0528 20:40:31.296959   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:31.296970   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:31.296980   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:31.299929   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:31.496970   22579 request.go:629] Waited for 196.357718ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:31.497029   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:31.497036   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:31.497047   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:31.497051   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:31.500188   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:31.501110   22579 pod_ready.go:92] pod "kube-scheduler-ha-908878-m02" in "kube-system" namespace has status "Ready":"True"
	I0528 20:40:31.501131   22579 pod_ready.go:81] duration metric: took 399.206587ms for pod "kube-scheduler-ha-908878-m02" in "kube-system" namespace to be "Ready" ...
	I0528 20:40:31.501144   22579 pod_ready.go:38] duration metric: took 11.638817981s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
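For reference, the pod_ready wait logged above boils down to polling each pod's Ready condition through the API server until it is True or the timeout expires. A minimal client-go sketch of that loop (illustrative only, not minikube's pod_ready.go; the kubeconfig path, the example pod name, and the fixed 2s poll interval are assumptions):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        deadline := time.Now().Add(6 * time.Minute) // same 6m0s budget as the log
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-scheduler-ha-908878", metav1.GetOptions{})
            if err == nil && podReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second) // the real waiter is also throttled client-side, as the log shows
        }
        fmt.Println("timed out waiting for pod")
    }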
	I0528 20:40:31.501161   22579 api_server.go:52] waiting for apiserver process to appear ...
	I0528 20:40:31.501233   22579 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 20:40:31.520498   22579 api_server.go:72] duration metric: took 19.538238682s to wait for apiserver process to appear ...
	I0528 20:40:31.520523   22579 api_server.go:88] waiting for apiserver healthz status ...
	I0528 20:40:31.520543   22579 api_server.go:253] Checking apiserver healthz at https://192.168.39.100:8443/healthz ...
	I0528 20:40:31.526513   22579 api_server.go:279] https://192.168.39.100:8443/healthz returned 200:
	ok
	I0528 20:40:31.526569   22579 round_trippers.go:463] GET https://192.168.39.100:8443/version
	I0528 20:40:31.526573   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:31.526581   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:31.526585   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:31.527447   22579 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 20:40:31.527535   22579 api_server.go:141] control plane version: v1.30.1
	I0528 20:40:31.527550   22579 api_server.go:131] duration metric: took 7.02174ms to wait for apiserver health ...
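The healthz and version probes above can be reproduced with the discovery client; a minimal sketch under the same kubeconfig assumption (not minikube's api_server.go code):

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // GET /healthz returns the literal body "ok" on a healthy control plane.
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
        fmt.Printf("healthz: %s err: %v\n", body, err)

        // GET /version reports the control-plane version (v1.30.1 in this run).
        if v, err := cs.Discovery().ServerVersion(); err == nil {
            fmt.Println("control plane version:", v.GitVersion)
        }
    }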
	I0528 20:40:31.527557   22579 system_pods.go:43] waiting for kube-system pods to appear ...
	I0528 20:40:31.696971   22579 request.go:629] Waited for 169.332456ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods
	I0528 20:40:31.697036   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods
	I0528 20:40:31.697043   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:31.697054   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:31.697064   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:31.702231   22579 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 20:40:31.706998   22579 system_pods.go:59] 17 kube-system pods found
	I0528 20:40:31.707031   22579 system_pods.go:61] "coredns-7db6d8ff4d-5fmns" [41a3bda1-29ba-4982-baf5-0adc97b4eb45] Running
	I0528 20:40:31.707037   22579 system_pods.go:61] "coredns-7db6d8ff4d-mvx67" [0b51beb7-0397-4008-b878-97edd41c6b94] Running
	I0528 20:40:31.707040   22579 system_pods.go:61] "etcd-ha-908878" [4cfaba35-0bd9-476b-95c2-abd111c4fcac] Running
	I0528 20:40:31.707044   22579 system_pods.go:61] "etcd-ha-908878-m02" [cb4f24be-dbf9-4c42-9a55-29cf6f0b6ecc] Running
	I0528 20:40:31.707047   22579 system_pods.go:61] "kindnet-6prxw" [77fae8b9-3abd-4a39-81ec-cc782b891331] Running
	I0528 20:40:31.707050   22579 system_pods.go:61] "kindnet-x4mzh" [8069a7ea-0ab1-4064-b982-867dbdfd97aa] Running
	I0528 20:40:31.707053   22579 system_pods.go:61] "kube-apiserver-ha-908878" [ff63f2af-3fc5-496c-b468-7447defad5e6] Running
	I0528 20:40:31.707056   22579 system_pods.go:61] "kube-apiserver-ha-908878-m02" [3a56592b-67cd-44d0-8907-2a62d4a6c671] Running
	I0528 20:40:31.707059   22579 system_pods.go:61] "kube-controller-manager-ha-908878" [e426060f-307d-41c7-8fb9-ab48709ce2a8] Running
	I0528 20:40:31.707062   22579 system_pods.go:61] "kube-controller-manager-ha-908878-m02" [232c3f41-5ba8-4fdf-848a-f8fb92f33a73] Running
	I0528 20:40:31.707065   22579 system_pods.go:61] "kube-proxy-ng8mq" [ca0b1264-09c7-44b2-ba8c-e145e825fdbe] Running
	I0528 20:40:31.707068   22579 system_pods.go:61] "kube-proxy-pg89k" [6eeda2cd-7b9e-440f-a8c3-c2ea8015106d] Running
	I0528 20:40:31.707072   22579 system_pods.go:61] "kube-scheduler-ha-908878" [7a9859a9-e92c-435b-a70e-5200f67d9589] Running
	I0528 20:40:31.707078   22579 system_pods.go:61] "kube-scheduler-ha-908878-m02" [c03b5557-cdca-4d39-800e-51a3a4f180b7] Running
	I0528 20:40:31.707081   22579 system_pods.go:61] "kube-vip-ha-908878" [45e2e6e2-6ea1-4498-aca1-9c3060eb9ca4] Running
	I0528 20:40:31.707084   22579 system_pods.go:61] "kube-vip-ha-908878-m02" [bcbc54fb-d0d4-422a-9e42-d61cd3f390ff] Running
	I0528 20:40:31.707089   22579 system_pods.go:61] "storage-provisioner" [d79872e2-b267-446a-99dc-5bf9f398d31c] Running
	I0528 20:40:31.707096   22579 system_pods.go:74] duration metric: took 179.532945ms to wait for pod list to return data ...
	I0528 20:40:31.707107   22579 default_sa.go:34] waiting for default service account to be created ...
	I0528 20:40:31.897544   22579 request.go:629] Waited for 190.352879ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/default/serviceaccounts
	I0528 20:40:31.897618   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/default/serviceaccounts
	I0528 20:40:31.897623   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:31.897630   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:31.897636   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:31.901501   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:31.901704   22579 default_sa.go:45] found service account: "default"
	I0528 20:40:31.901720   22579 default_sa.go:55] duration metric: took 194.607645ms for default service account to be created ...
	I0528 20:40:31.901727   22579 system_pods.go:116] waiting for k8s-apps to be running ...
	I0528 20:40:32.097169   22579 request.go:629] Waited for 195.374316ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods
	I0528 20:40:32.097219   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods
	I0528 20:40:32.097224   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:32.097231   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:32.097256   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:32.102508   22579 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 20:40:32.109708   22579 system_pods.go:86] 17 kube-system pods found
	I0528 20:40:32.110148   22579 system_pods.go:89] "coredns-7db6d8ff4d-5fmns" [41a3bda1-29ba-4982-baf5-0adc97b4eb45] Running
	I0528 20:40:32.110186   22579 system_pods.go:89] "coredns-7db6d8ff4d-mvx67" [0b51beb7-0397-4008-b878-97edd41c6b94] Running
	I0528 20:40:32.110194   22579 system_pods.go:89] "etcd-ha-908878" [4cfaba35-0bd9-476b-95c2-abd111c4fcac] Running
	I0528 20:40:32.110201   22579 system_pods.go:89] "etcd-ha-908878-m02" [cb4f24be-dbf9-4c42-9a55-29cf6f0b6ecc] Running
	I0528 20:40:32.110208   22579 system_pods.go:89] "kindnet-6prxw" [77fae8b9-3abd-4a39-81ec-cc782b891331] Running
	I0528 20:40:32.110213   22579 system_pods.go:89] "kindnet-x4mzh" [8069a7ea-0ab1-4064-b982-867dbdfd97aa] Running
	I0528 20:40:32.110220   22579 system_pods.go:89] "kube-apiserver-ha-908878" [ff63f2af-3fc5-496c-b468-7447defad5e6] Running
	I0528 20:40:32.110227   22579 system_pods.go:89] "kube-apiserver-ha-908878-m02" [3a56592b-67cd-44d0-8907-2a62d4a6c671] Running
	I0528 20:40:32.110234   22579 system_pods.go:89] "kube-controller-manager-ha-908878" [e426060f-307d-41c7-8fb9-ab48709ce2a8] Running
	I0528 20:40:32.110244   22579 system_pods.go:89] "kube-controller-manager-ha-908878-m02" [232c3f41-5ba8-4fdf-848a-f8fb92f33a73] Running
	I0528 20:40:32.110253   22579 system_pods.go:89] "kube-proxy-ng8mq" [ca0b1264-09c7-44b2-ba8c-e145e825fdbe] Running
	I0528 20:40:32.110258   22579 system_pods.go:89] "kube-proxy-pg89k" [6eeda2cd-7b9e-440f-a8c3-c2ea8015106d] Running
	I0528 20:40:32.110264   22579 system_pods.go:89] "kube-scheduler-ha-908878" [7a9859a9-e92c-435b-a70e-5200f67d9589] Running
	I0528 20:40:32.110271   22579 system_pods.go:89] "kube-scheduler-ha-908878-m02" [c03b5557-cdca-4d39-800e-51a3a4f180b7] Running
	I0528 20:40:32.110276   22579 system_pods.go:89] "kube-vip-ha-908878" [45e2e6e2-6ea1-4498-aca1-9c3060eb9ca4] Running
	I0528 20:40:32.110287   22579 system_pods.go:89] "kube-vip-ha-908878-m02" [bcbc54fb-d0d4-422a-9e42-d61cd3f390ff] Running
	I0528 20:40:32.110294   22579 system_pods.go:89] "storage-provisioner" [d79872e2-b267-446a-99dc-5bf9f398d31c] Running
	I0528 20:40:32.110302   22579 system_pods.go:126] duration metric: took 208.569354ms to wait for k8s-apps to be running ...
	I0528 20:40:32.110311   22579 system_svc.go:44] waiting for kubelet service to be running ....
	I0528 20:40:32.110363   22579 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 20:40:32.126032   22579 system_svc.go:56] duration metric: took 15.712055ms WaitForService to wait for kubelet
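The kubelet check is a plain systemctl exit-status test; a minimal local sketch (the log runs the equivalent command on the guest over SSH):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Exit status 0 means the unit is active; any other status means it is not.
        err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run()
        fmt.Println("kubelet active:", err == nil)
    }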
	I0528 20:40:32.126069   22579 kubeadm.go:576] duration metric: took 20.143813701s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 20:40:32.126095   22579 node_conditions.go:102] verifying NodePressure condition ...
	I0528 20:40:32.297495   22579 request.go:629] Waited for 171.325182ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes
	I0528 20:40:32.297568   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes
	I0528 20:40:32.297575   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:32.297586   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:32.297595   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:32.301176   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:32.302179   22579 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 20:40:32.302203   22579 node_conditions.go:123] node cpu capacity is 2
	I0528 20:40:32.302223   22579 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 20:40:32.302226   22579 node_conditions.go:123] node cpu capacity is 2
	I0528 20:40:32.302230   22579 node_conditions.go:105] duration metric: took 176.129957ms to run NodePressure ...
	I0528 20:40:32.302240   22579 start.go:240] waiting for startup goroutines ...
	I0528 20:40:32.302273   22579 start.go:254] writing updated cluster config ...
	I0528 20:40:32.304519   22579 out.go:177] 
	I0528 20:40:32.306057   22579 config.go:182] Loaded profile config "ha-908878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 20:40:32.306152   22579 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/config.json ...
	I0528 20:40:32.307616   22579 out.go:177] * Starting "ha-908878-m03" control-plane node in "ha-908878" cluster
	I0528 20:40:32.308633   22579 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 20:40:32.308655   22579 cache.go:56] Caching tarball of preloaded images
	I0528 20:40:32.308744   22579 preload.go:173] Found /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0528 20:40:32.308757   22579 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0528 20:40:32.308858   22579 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/config.json ...
	I0528 20:40:32.309023   22579 start.go:360] acquireMachinesLock for ha-908878-m03: {Name:mk6fb3103b370c7f3d9923fd9de7afe2c77239d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0528 20:40:32.309063   22579 start.go:364] duration metric: took 22.465µs to acquireMachinesLock for "ha-908878-m03"
	I0528 20:40:32.309079   22579 start.go:93] Provisioning new machine with config: &{Name:ha-908878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-908878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.239 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0528 20:40:32.309170   22579 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0528 20:40:32.310490   22579 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0528 20:40:32.310572   22579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:40:32.310602   22579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:40:32.325282   22579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36189
	I0528 20:40:32.325769   22579 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:40:32.326253   22579 main.go:141] libmachine: Using API Version  1
	I0528 20:40:32.326275   22579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:40:32.326564   22579 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:40:32.326778   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetMachineName
	I0528 20:40:32.326890   22579 main.go:141] libmachine: (ha-908878-m03) Calling .DriverName
	I0528 20:40:32.327052   22579 start.go:159] libmachine.API.Create for "ha-908878" (driver="kvm2")
	I0528 20:40:32.327078   22579 client.go:168] LocalClient.Create starting
	I0528 20:40:32.327105   22579 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem
	I0528 20:40:32.327137   22579 main.go:141] libmachine: Decoding PEM data...
	I0528 20:40:32.327168   22579 main.go:141] libmachine: Parsing certificate...
	I0528 20:40:32.327215   22579 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem
	I0528 20:40:32.327234   22579 main.go:141] libmachine: Decoding PEM data...
	I0528 20:40:32.327246   22579 main.go:141] libmachine: Parsing certificate...
	I0528 20:40:32.327263   22579 main.go:141] libmachine: Running pre-create checks...
	I0528 20:40:32.327276   22579 main.go:141] libmachine: (ha-908878-m03) Calling .PreCreateCheck
	I0528 20:40:32.327406   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetConfigRaw
	I0528 20:40:32.327766   22579 main.go:141] libmachine: Creating machine...
	I0528 20:40:32.327779   22579 main.go:141] libmachine: (ha-908878-m03) Calling .Create
	I0528 20:40:32.327882   22579 main.go:141] libmachine: (ha-908878-m03) Creating KVM machine...
	I0528 20:40:32.328975   22579 main.go:141] libmachine: (ha-908878-m03) DBG | found existing default KVM network
	I0528 20:40:32.329121   22579 main.go:141] libmachine: (ha-908878-m03) DBG | found existing private KVM network mk-ha-908878
	I0528 20:40:32.329218   22579 main.go:141] libmachine: (ha-908878-m03) Setting up store path in /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m03 ...
	I0528 20:40:32.329248   22579 main.go:141] libmachine: (ha-908878-m03) Building disk image from file:///home/jenkins/minikube-integration/18966-3963/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso
	I0528 20:40:32.329322   22579 main.go:141] libmachine: (ha-908878-m03) DBG | I0528 20:40:32.329224   23357 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 20:40:32.329418   22579 main.go:141] libmachine: (ha-908878-m03) Downloading /home/jenkins/minikube-integration/18966-3963/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18966-3963/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0528 20:40:32.547551   22579 main.go:141] libmachine: (ha-908878-m03) DBG | I0528 20:40:32.547423   23357 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m03/id_rsa...
	I0528 20:40:32.777813   22579 main.go:141] libmachine: (ha-908878-m03) DBG | I0528 20:40:32.777665   23357 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m03/ha-908878-m03.rawdisk...
	I0528 20:40:32.777853   22579 main.go:141] libmachine: (ha-908878-m03) DBG | Writing magic tar header
	I0528 20:40:32.777892   22579 main.go:141] libmachine: (ha-908878-m03) DBG | Writing SSH key tar header
	I0528 20:40:32.777934   22579 main.go:141] libmachine: (ha-908878-m03) DBG | I0528 20:40:32.777826   23357 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m03 ...
	I0528 20:40:32.777969   22579 main.go:141] libmachine: (ha-908878-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m03
	I0528 20:40:32.777995   22579 main.go:141] libmachine: (ha-908878-m03) Setting executable bit set on /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m03 (perms=drwx------)
	I0528 20:40:32.778011   22579 main.go:141] libmachine: (ha-908878-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18966-3963/.minikube/machines
	I0528 20:40:32.778027   22579 main.go:141] libmachine: (ha-908878-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 20:40:32.778041   22579 main.go:141] libmachine: (ha-908878-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18966-3963
	I0528 20:40:32.778056   22579 main.go:141] libmachine: (ha-908878-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0528 20:40:32.778068   22579 main.go:141] libmachine: (ha-908878-m03) DBG | Checking permissions on dir: /home/jenkins
	I0528 20:40:32.778080   22579 main.go:141] libmachine: (ha-908878-m03) DBG | Checking permissions on dir: /home
	I0528 20:40:32.778096   22579 main.go:141] libmachine: (ha-908878-m03) DBG | Skipping /home - not owner
	I0528 20:40:32.778109   22579 main.go:141] libmachine: (ha-908878-m03) Setting executable bit set on /home/jenkins/minikube-integration/18966-3963/.minikube/machines (perms=drwxr-xr-x)
	I0528 20:40:32.778124   22579 main.go:141] libmachine: (ha-908878-m03) Setting executable bit set on /home/jenkins/minikube-integration/18966-3963/.minikube (perms=drwxr-xr-x)
	I0528 20:40:32.778137   22579 main.go:141] libmachine: (ha-908878-m03) Setting executable bit set on /home/jenkins/minikube-integration/18966-3963 (perms=drwxrwxr-x)
	I0528 20:40:32.778150   22579 main.go:141] libmachine: (ha-908878-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0528 20:40:32.778161   22579 main.go:141] libmachine: (ha-908878-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0528 20:40:32.778212   22579 main.go:141] libmachine: (ha-908878-m03) Creating domain...
	I0528 20:40:32.779207   22579 main.go:141] libmachine: (ha-908878-m03) define libvirt domain using xml: 
	I0528 20:40:32.779231   22579 main.go:141] libmachine: (ha-908878-m03) <domain type='kvm'>
	I0528 20:40:32.779243   22579 main.go:141] libmachine: (ha-908878-m03)   <name>ha-908878-m03</name>
	I0528 20:40:32.779250   22579 main.go:141] libmachine: (ha-908878-m03)   <memory unit='MiB'>2200</memory>
	I0528 20:40:32.779259   22579 main.go:141] libmachine: (ha-908878-m03)   <vcpu>2</vcpu>
	I0528 20:40:32.779265   22579 main.go:141] libmachine: (ha-908878-m03)   <features>
	I0528 20:40:32.779273   22579 main.go:141] libmachine: (ha-908878-m03)     <acpi/>
	I0528 20:40:32.779279   22579 main.go:141] libmachine: (ha-908878-m03)     <apic/>
	I0528 20:40:32.779288   22579 main.go:141] libmachine: (ha-908878-m03)     <pae/>
	I0528 20:40:32.779298   22579 main.go:141] libmachine: (ha-908878-m03)     
	I0528 20:40:32.779308   22579 main.go:141] libmachine: (ha-908878-m03)   </features>
	I0528 20:40:32.779330   22579 main.go:141] libmachine: (ha-908878-m03)   <cpu mode='host-passthrough'>
	I0528 20:40:32.779347   22579 main.go:141] libmachine: (ha-908878-m03)   
	I0528 20:40:32.779356   22579 main.go:141] libmachine: (ha-908878-m03)   </cpu>
	I0528 20:40:32.779362   22579 main.go:141] libmachine: (ha-908878-m03)   <os>
	I0528 20:40:32.779375   22579 main.go:141] libmachine: (ha-908878-m03)     <type>hvm</type>
	I0528 20:40:32.779387   22579 main.go:141] libmachine: (ha-908878-m03)     <boot dev='cdrom'/>
	I0528 20:40:32.779397   22579 main.go:141] libmachine: (ha-908878-m03)     <boot dev='hd'/>
	I0528 20:40:32.779404   22579 main.go:141] libmachine: (ha-908878-m03)     <bootmenu enable='no'/>
	I0528 20:40:32.779412   22579 main.go:141] libmachine: (ha-908878-m03)   </os>
	I0528 20:40:32.779420   22579 main.go:141] libmachine: (ha-908878-m03)   <devices>
	I0528 20:40:32.779430   22579 main.go:141] libmachine: (ha-908878-m03)     <disk type='file' device='cdrom'>
	I0528 20:40:32.779445   22579 main.go:141] libmachine: (ha-908878-m03)       <source file='/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m03/boot2docker.iso'/>
	I0528 20:40:32.779467   22579 main.go:141] libmachine: (ha-908878-m03)       <target dev='hdc' bus='scsi'/>
	I0528 20:40:32.779480   22579 main.go:141] libmachine: (ha-908878-m03)       <readonly/>
	I0528 20:40:32.779491   22579 main.go:141] libmachine: (ha-908878-m03)     </disk>
	I0528 20:40:32.779502   22579 main.go:141] libmachine: (ha-908878-m03)     <disk type='file' device='disk'>
	I0528 20:40:32.779514   22579 main.go:141] libmachine: (ha-908878-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0528 20:40:32.779522   22579 main.go:141] libmachine: (ha-908878-m03)       <source file='/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m03/ha-908878-m03.rawdisk'/>
	I0528 20:40:32.779531   22579 main.go:141] libmachine: (ha-908878-m03)       <target dev='hda' bus='virtio'/>
	I0528 20:40:32.779536   22579 main.go:141] libmachine: (ha-908878-m03)     </disk>
	I0528 20:40:32.779542   22579 main.go:141] libmachine: (ha-908878-m03)     <interface type='network'>
	I0528 20:40:32.779547   22579 main.go:141] libmachine: (ha-908878-m03)       <source network='mk-ha-908878'/>
	I0528 20:40:32.779572   22579 main.go:141] libmachine: (ha-908878-m03)       <model type='virtio'/>
	I0528 20:40:32.779595   22579 main.go:141] libmachine: (ha-908878-m03)     </interface>
	I0528 20:40:32.779607   22579 main.go:141] libmachine: (ha-908878-m03)     <interface type='network'>
	I0528 20:40:32.779613   22579 main.go:141] libmachine: (ha-908878-m03)       <source network='default'/>
	I0528 20:40:32.779625   22579 main.go:141] libmachine: (ha-908878-m03)       <model type='virtio'/>
	I0528 20:40:32.779636   22579 main.go:141] libmachine: (ha-908878-m03)     </interface>
	I0528 20:40:32.779646   22579 main.go:141] libmachine: (ha-908878-m03)     <serial type='pty'>
	I0528 20:40:32.779657   22579 main.go:141] libmachine: (ha-908878-m03)       <target port='0'/>
	I0528 20:40:32.779667   22579 main.go:141] libmachine: (ha-908878-m03)     </serial>
	I0528 20:40:32.779680   22579 main.go:141] libmachine: (ha-908878-m03)     <console type='pty'>
	I0528 20:40:32.779690   22579 main.go:141] libmachine: (ha-908878-m03)       <target type='serial' port='0'/>
	I0528 20:40:32.779699   22579 main.go:141] libmachine: (ha-908878-m03)     </console>
	I0528 20:40:32.779705   22579 main.go:141] libmachine: (ha-908878-m03)     <rng model='virtio'>
	I0528 20:40:32.779720   22579 main.go:141] libmachine: (ha-908878-m03)       <backend model='random'>/dev/random</backend>
	I0528 20:40:32.779731   22579 main.go:141] libmachine: (ha-908878-m03)     </rng>
	I0528 20:40:32.779742   22579 main.go:141] libmachine: (ha-908878-m03)     
	I0528 20:40:32.779752   22579 main.go:141] libmachine: (ha-908878-m03)     
	I0528 20:40:32.779760   22579 main.go:141] libmachine: (ha-908878-m03)   </devices>
	I0528 20:40:32.779769   22579 main.go:141] libmachine: (ha-908878-m03) </domain>
	I0528 20:40:32.779779   22579 main.go:141] libmachine: (ha-908878-m03) 
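The XML above is what gets handed to libvirt. A minimal sketch of defining and starting such a domain from Go by shelling out to virsh (the XML file path is hypothetical; the kvm2 driver itself talks to libvirt through its API rather than via virsh):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        const xmlPath = "/tmp/ha-908878-m03.xml" // hypothetical file holding the domain XML above

        // Define the persistent domain from the XML, then start it.
        for _, args := range [][]string{
            {"virsh", "-c", "qemu:///system", "define", xmlPath},
            {"virsh", "-c", "qemu:///system", "start", "ha-908878-m03"},
        } {
            cmd := exec.Command(args[0], args[1:]...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                fmt.Println("virsh failed:", err)
                return
            }
        }
    }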
	I0528 20:40:32.785969   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:b7:c2:f7 in network default
	I0528 20:40:32.786495   22579 main.go:141] libmachine: (ha-908878-m03) Ensuring networks are active...
	I0528 20:40:32.786513   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:32.787177   22579 main.go:141] libmachine: (ha-908878-m03) Ensuring network default is active
	I0528 20:40:32.787502   22579 main.go:141] libmachine: (ha-908878-m03) Ensuring network mk-ha-908878 is active
	I0528 20:40:32.787897   22579 main.go:141] libmachine: (ha-908878-m03) Getting domain xml...
	I0528 20:40:32.788680   22579 main.go:141] libmachine: (ha-908878-m03) Creating domain...
	I0528 20:40:34.013976   22579 main.go:141] libmachine: (ha-908878-m03) Waiting to get IP...
	I0528 20:40:34.014793   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:34.015195   22579 main.go:141] libmachine: (ha-908878-m03) DBG | unable to find current IP address of domain ha-908878-m03 in network mk-ha-908878
	I0528 20:40:34.015234   22579 main.go:141] libmachine: (ha-908878-m03) DBG | I0528 20:40:34.015178   23357 retry.go:31] will retry after 286.936339ms: waiting for machine to come up
	I0528 20:40:34.303824   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:34.304264   22579 main.go:141] libmachine: (ha-908878-m03) DBG | unable to find current IP address of domain ha-908878-m03 in network mk-ha-908878
	I0528 20:40:34.304285   22579 main.go:141] libmachine: (ha-908878-m03) DBG | I0528 20:40:34.304222   23357 retry.go:31] will retry after 285.998635ms: waiting for machine to come up
	I0528 20:40:34.591687   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:34.592185   22579 main.go:141] libmachine: (ha-908878-m03) DBG | unable to find current IP address of domain ha-908878-m03 in network mk-ha-908878
	I0528 20:40:34.592210   22579 main.go:141] libmachine: (ha-908878-m03) DBG | I0528 20:40:34.592145   23357 retry.go:31] will retry after 486.004926ms: waiting for machine to come up
	I0528 20:40:35.079894   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:35.080366   22579 main.go:141] libmachine: (ha-908878-m03) DBG | unable to find current IP address of domain ha-908878-m03 in network mk-ha-908878
	I0528 20:40:35.080387   22579 main.go:141] libmachine: (ha-908878-m03) DBG | I0528 20:40:35.080333   23357 retry.go:31] will retry after 430.172641ms: waiting for machine to come up
	I0528 20:40:35.512130   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:35.512597   22579 main.go:141] libmachine: (ha-908878-m03) DBG | unable to find current IP address of domain ha-908878-m03 in network mk-ha-908878
	I0528 20:40:35.512627   22579 main.go:141] libmachine: (ha-908878-m03) DBG | I0528 20:40:35.512550   23357 retry.go:31] will retry after 655.401985ms: waiting for machine to come up
	I0528 20:40:36.169262   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:36.169688   22579 main.go:141] libmachine: (ha-908878-m03) DBG | unable to find current IP address of domain ha-908878-m03 in network mk-ha-908878
	I0528 20:40:36.169718   22579 main.go:141] libmachine: (ha-908878-m03) DBG | I0528 20:40:36.169639   23357 retry.go:31] will retry after 953.090401ms: waiting for machine to come up
	I0528 20:40:37.124742   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:37.125027   22579 main.go:141] libmachine: (ha-908878-m03) DBG | unable to find current IP address of domain ha-908878-m03 in network mk-ha-908878
	I0528 20:40:37.125049   22579 main.go:141] libmachine: (ha-908878-m03) DBG | I0528 20:40:37.125009   23357 retry.go:31] will retry after 933.575405ms: waiting for machine to come up
	I0528 20:40:38.059832   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:38.060305   22579 main.go:141] libmachine: (ha-908878-m03) DBG | unable to find current IP address of domain ha-908878-m03 in network mk-ha-908878
	I0528 20:40:38.060332   22579 main.go:141] libmachine: (ha-908878-m03) DBG | I0528 20:40:38.060253   23357 retry.go:31] will retry after 933.852896ms: waiting for machine to come up
	I0528 20:40:38.995421   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:38.995923   22579 main.go:141] libmachine: (ha-908878-m03) DBG | unable to find current IP address of domain ha-908878-m03 in network mk-ha-908878
	I0528 20:40:38.995949   22579 main.go:141] libmachine: (ha-908878-m03) DBG | I0528 20:40:38.995867   23357 retry.go:31] will retry after 1.701447515s: waiting for machine to come up
	I0528 20:40:40.699010   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:40.699492   22579 main.go:141] libmachine: (ha-908878-m03) DBG | unable to find current IP address of domain ha-908878-m03 in network mk-ha-908878
	I0528 20:40:40.699517   22579 main.go:141] libmachine: (ha-908878-m03) DBG | I0528 20:40:40.699450   23357 retry.go:31] will retry after 1.616110377s: waiting for machine to come up
	I0528 20:40:42.318070   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:42.318522   22579 main.go:141] libmachine: (ha-908878-m03) DBG | unable to find current IP address of domain ha-908878-m03 in network mk-ha-908878
	I0528 20:40:42.318561   22579 main.go:141] libmachine: (ha-908878-m03) DBG | I0528 20:40:42.318452   23357 retry.go:31] will retry after 2.231719862s: waiting for machine to come up
	I0528 20:40:44.553111   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:44.553614   22579 main.go:141] libmachine: (ha-908878-m03) DBG | unable to find current IP address of domain ha-908878-m03 in network mk-ha-908878
	I0528 20:40:44.553644   22579 main.go:141] libmachine: (ha-908878-m03) DBG | I0528 20:40:44.553577   23357 retry.go:31] will retry after 2.63642465s: waiting for machine to come up
	I0528 20:40:47.191927   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:47.192245   22579 main.go:141] libmachine: (ha-908878-m03) DBG | unable to find current IP address of domain ha-908878-m03 in network mk-ha-908878
	I0528 20:40:47.192265   22579 main.go:141] libmachine: (ha-908878-m03) DBG | I0528 20:40:47.192220   23357 retry.go:31] will retry after 3.239065222s: waiting for machine to come up
	I0528 20:40:50.434633   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:50.435003   22579 main.go:141] libmachine: (ha-908878-m03) DBG | unable to find current IP address of domain ha-908878-m03 in network mk-ha-908878
	I0528 20:40:50.435025   22579 main.go:141] libmachine: (ha-908878-m03) DBG | I0528 20:40:50.434967   23357 retry.go:31] will retry after 5.565960506s: waiting for machine to come up
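The retry.go lines above follow a jittered, roughly growing backoff until the DHCP lease shows up. A generic sketch of that pattern (illustrative only, not minikube's retry package; the starting delay and doubling factor are assumptions):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retry keeps calling fn with a jittered, roughly doubling delay until it
    // succeeds or the overall deadline passes.
    func retry(deadline time.Duration, fn func() error) error {
        start := time.Now()
        wait := 300 * time.Millisecond
        for {
            if err := fn(); err == nil {
                return nil
            }
            if time.Since(start) > deadline {
                return errors.New("timed out")
            }
            sleep := wait + time.Duration(rand.Int63n(int64(wait)))
            fmt.Printf("will retry after %v\n", sleep)
            time.Sleep(sleep)
            wait *= 2
        }
    }

    func main() {
        attempts := 0
        _ = retry(time.Minute, func() error {
            attempts++
            if attempts < 5 {
                return errors.New("unable to find current IP address") // stand-in for the DHCP lease lookup
            }
            return nil
        })
        fmt.Println("machine came up after", attempts, "attempts")
    }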
	I0528 20:40:56.004958   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:56.005405   22579 main.go:141] libmachine: (ha-908878-m03) Found IP for machine: 192.168.39.73
	I0528 20:40:56.005430   22579 main.go:141] libmachine: (ha-908878-m03) Reserving static IP address...
	I0528 20:40:56.005443   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has current primary IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:56.005865   22579 main.go:141] libmachine: (ha-908878-m03) DBG | unable to find host DHCP lease matching {name: "ha-908878-m03", mac: "52:54:00:92:3d:20", ip: "192.168.39.73"} in network mk-ha-908878
	I0528 20:40:56.074484   22579 main.go:141] libmachine: (ha-908878-m03) DBG | Getting to WaitForSSH function...
	I0528 20:40:56.074514   22579 main.go:141] libmachine: (ha-908878-m03) Reserved static IP address: 192.168.39.73
	I0528 20:40:56.074530   22579 main.go:141] libmachine: (ha-908878-m03) Waiting for SSH to be available...
	I0528 20:40:56.076890   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:56.077254   22579 main.go:141] libmachine: (ha-908878-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878
	I0528 20:40:56.077279   22579 main.go:141] libmachine: (ha-908878-m03) DBG | unable to find defined IP address of network mk-ha-908878 interface with MAC address 52:54:00:92:3d:20
	I0528 20:40:56.077406   22579 main.go:141] libmachine: (ha-908878-m03) DBG | Using SSH client type: external
	I0528 20:40:56.077429   22579 main.go:141] libmachine: (ha-908878-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m03/id_rsa (-rw-------)
	I0528 20:40:56.077461   22579 main.go:141] libmachine: (ha-908878-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0528 20:40:56.077471   22579 main.go:141] libmachine: (ha-908878-m03) DBG | About to run SSH command:
	I0528 20:40:56.077483   22579 main.go:141] libmachine: (ha-908878-m03) DBG | exit 0
	I0528 20:40:56.081665   22579 main.go:141] libmachine: (ha-908878-m03) DBG | SSH cmd err, output: exit status 255: 
	I0528 20:40:56.081681   22579 main.go:141] libmachine: (ha-908878-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0528 20:40:56.081688   22579 main.go:141] libmachine: (ha-908878-m03) DBG | command : exit 0
	I0528 20:40:56.081697   22579 main.go:141] libmachine: (ha-908878-m03) DBG | err     : exit status 255
	I0528 20:40:56.081729   22579 main.go:141] libmachine: (ha-908878-m03) DBG | output  : 
	I0528 20:40:59.081870   22579 main.go:141] libmachine: (ha-908878-m03) DBG | Getting to WaitForSSH function...
	I0528 20:40:59.084087   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:59.084505   22579 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:40:59.084527   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:59.084694   22579 main.go:141] libmachine: (ha-908878-m03) DBG | Using SSH client type: external
	I0528 20:40:59.084722   22579 main.go:141] libmachine: (ha-908878-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m03/id_rsa (-rw-------)
	I0528 20:40:59.084750   22579 main.go:141] libmachine: (ha-908878-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.73 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0528 20:40:59.084768   22579 main.go:141] libmachine: (ha-908878-m03) DBG | About to run SSH command:
	I0528 20:40:59.084781   22579 main.go:141] libmachine: (ha-908878-m03) DBG | exit 0
	I0528 20:40:59.217703   22579 main.go:141] libmachine: (ha-908878-m03) DBG | SSH cmd err, output: <nil>: 
	I0528 20:40:59.218058   22579 main.go:141] libmachine: (ha-908878-m03) KVM machine creation complete!
	I0528 20:40:59.218352   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetConfigRaw
	I0528 20:40:59.218867   22579 main.go:141] libmachine: (ha-908878-m03) Calling .DriverName
	I0528 20:40:59.219065   22579 main.go:141] libmachine: (ha-908878-m03) Calling .DriverName
	I0528 20:40:59.219251   22579 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0528 20:40:59.219267   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetState
	I0528 20:40:59.220625   22579 main.go:141] libmachine: Detecting operating system of created instance...
	I0528 20:40:59.220639   22579 main.go:141] libmachine: Waiting for SSH to be available...
	I0528 20:40:59.220644   22579 main.go:141] libmachine: Getting to WaitForSSH function...
	I0528 20:40:59.220650   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHHostname
	I0528 20:40:59.222765   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:59.223152   22579 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:40:59.223181   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:59.223366   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHPort
	I0528 20:40:59.223559   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHKeyPath
	I0528 20:40:59.223699   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHKeyPath
	I0528 20:40:59.223852   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHUsername
	I0528 20:40:59.224054   22579 main.go:141] libmachine: Using SSH client type: native
	I0528 20:40:59.224236   22579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0528 20:40:59.224247   22579 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0528 20:40:59.337067   22579 main.go:141] libmachine: SSH cmd err, output: <nil>: 
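The native SSH probe above simply runs "exit 0" and checks for a zero status. A minimal sketch with golang.org/x/crypto/ssh, reusing the key path and address from the log (InsecureIgnoreHostKey mirrors StrictHostKeyChecking=no and should only be used against throwaway test VMs):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m03/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM only
        }
        client, err := ssh.Dial("tcp", "192.168.39.73:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()

        // A zero exit status means sshd is up and the key is accepted.
        fmt.Println("exit 0 over SSH:", sess.Run("exit 0") == nil)
    }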
	I0528 20:40:59.337086   22579 main.go:141] libmachine: Detecting the provisioner...
	I0528 20:40:59.337094   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHHostname
	I0528 20:40:59.339822   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:59.340220   22579 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:40:59.340249   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:59.340378   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHPort
	I0528 20:40:59.340608   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHKeyPath
	I0528 20:40:59.340739   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHKeyPath
	I0528 20:40:59.340863   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHUsername
	I0528 20:40:59.341022   22579 main.go:141] libmachine: Using SSH client type: native
	I0528 20:40:59.341251   22579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0528 20:40:59.341265   22579 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0528 20:40:59.454410   22579 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0528 20:40:59.454467   22579 main.go:141] libmachine: found compatible host: buildroot
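Provisioner detection keys off the ID field of the /etc/os-release output shown above. A minimal sketch (not libmachine's detector):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/etc/os-release")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        info := map[string]string{}
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            if k, v, ok := strings.Cut(sc.Text(), "="); ok {
                info[k] = strings.Trim(v, `"`)
            }
        }
        // The guest image in this run identifies itself as Buildroot 2023.02.9.
        if info["ID"] == "buildroot" {
            fmt.Println("found compatible host: buildroot", info["VERSION_ID"])
        } else {
            fmt.Println("unexpected host:", info["PRETTY_NAME"])
        }
    }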
	I0528 20:40:59.454477   22579 main.go:141] libmachine: Provisioning with buildroot...
	I0528 20:40:59.454491   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetMachineName
	I0528 20:40:59.454715   22579 buildroot.go:166] provisioning hostname "ha-908878-m03"
	I0528 20:40:59.454738   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetMachineName
	I0528 20:40:59.454931   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHHostname
	I0528 20:40:59.457481   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:59.457908   22579 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:40:59.457937   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:59.457996   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHPort
	I0528 20:40:59.458153   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHKeyPath
	I0528 20:40:59.458298   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHKeyPath
	I0528 20:40:59.458446   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHUsername
	I0528 20:40:59.458613   22579 main.go:141] libmachine: Using SSH client type: native
	I0528 20:40:59.458769   22579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0528 20:40:59.458781   22579 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-908878-m03 && echo "ha-908878-m03" | sudo tee /etc/hostname
	I0528 20:40:59.585371   22579 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-908878-m03
	
	I0528 20:40:59.585412   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHHostname
	I0528 20:40:59.587939   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:59.588326   22579 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:40:59.588357   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:59.588503   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHPort
	I0528 20:40:59.588763   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHKeyPath
	I0528 20:40:59.588952   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHKeyPath
	I0528 20:40:59.589112   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHUsername
	I0528 20:40:59.589291   22579 main.go:141] libmachine: Using SSH client type: native
	I0528 20:40:59.589493   22579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0528 20:40:59.589518   22579 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-908878-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-908878-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-908878-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 20:40:59.711306   22579 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 20:40:59.711331   22579 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18966-3963/.minikube CaCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18966-3963/.minikube}
	I0528 20:40:59.711345   22579 buildroot.go:174] setting up certificates
	I0528 20:40:59.711355   22579 provision.go:84] configureAuth start
	I0528 20:40:59.711367   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetMachineName
	I0528 20:40:59.711644   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetIP
	I0528 20:40:59.714387   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:59.714764   22579 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:40:59.714793   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:59.714910   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHHostname
	I0528 20:40:59.717214   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:59.717616   22579 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:40:59.717644   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:59.717798   22579 provision.go:143] copyHostCerts
	I0528 20:40:59.717830   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem
	I0528 20:40:59.717868   22579 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem, removing ...
	I0528 20:40:59.717880   22579 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem
	I0528 20:40:59.717959   22579 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem (1078 bytes)
	I0528 20:40:59.718054   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem
	I0528 20:40:59.718078   22579 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem, removing ...
	I0528 20:40:59.718087   22579 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem
	I0528 20:40:59.718123   22579 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem (1123 bytes)
	I0528 20:40:59.718190   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem
	I0528 20:40:59.718215   22579 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem, removing ...
	I0528 20:40:59.718224   22579 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem
	I0528 20:40:59.718266   22579 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem (1675 bytes)
	I0528 20:40:59.718354   22579 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem org=jenkins.ha-908878-m03 san=[127.0.0.1 192.168.39.73 ha-908878-m03 localhost minikube]
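
The server certificate generated above carries a SAN list covering 127.0.0.1, the node IP, the hostname, localhost and minikube. A self-contained Go sketch of producing a certificate with the same SAN shape via crypto/x509 follows; it self-signs to stay self-contained, whereas the log shows minikube signing with its machine CA, so treat it as illustration only.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Generate a throwaway key; minikube reuses its CA key material instead.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-908878-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs mirror the san=[...] list in the log line above.
		DNSNames:    []string{"ha-908878-m03", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.73")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
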
	I0528 20:40:59.898087   22579 provision.go:177] copyRemoteCerts
	I0528 20:40:59.898139   22579 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 20:40:59.898161   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHHostname
	I0528 20:40:59.900892   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:59.901581   22579 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:40:59.901614   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:59.901792   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHPort
	I0528 20:40:59.901976   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHKeyPath
	I0528 20:40:59.902108   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHUsername
	I0528 20:40:59.902249   22579 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m03/id_rsa Username:docker}
	I0528 20:40:59.988393   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0528 20:40:59.988475   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0528 20:41:00.012880   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0528 20:41:00.012967   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0528 20:41:00.036809   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0528 20:41:00.036890   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0528 20:41:00.067715   22579 provision.go:87] duration metric: took 356.347821ms to configureAuth
	I0528 20:41:00.067746   22579 buildroot.go:189] setting minikube options for container-runtime
	I0528 20:41:00.067971   22579 config.go:182] Loaded profile config "ha-908878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 20:41:00.068060   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHHostname
	I0528 20:41:00.070792   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:41:00.071208   22579 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:41:00.071237   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:41:00.071394   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHPort
	I0528 20:41:00.071606   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHKeyPath
	I0528 20:41:00.071775   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHKeyPath
	I0528 20:41:00.071896   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHUsername
	I0528 20:41:00.072116   22579 main.go:141] libmachine: Using SSH client type: native
	I0528 20:41:00.072269   22579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0528 20:41:00.072283   22579 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0528 20:41:00.354424   22579 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0528 20:41:00.354456   22579 main.go:141] libmachine: Checking connection to Docker...
	I0528 20:41:00.354469   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetURL
	I0528 20:41:00.355955   22579 main.go:141] libmachine: (ha-908878-m03) DBG | Using libvirt version 6000000
	I0528 20:41:00.358290   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:41:00.358680   22579 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:41:00.358711   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:41:00.358865   22579 main.go:141] libmachine: Docker is up and running!
	I0528 20:41:00.358877   22579 main.go:141] libmachine: Reticulating splines...
	I0528 20:41:00.358883   22579 client.go:171] duration metric: took 28.031799176s to LocalClient.Create
	I0528 20:41:00.358904   22579 start.go:167] duration metric: took 28.031853438s to libmachine.API.Create "ha-908878"
	I0528 20:41:00.358916   22579 start.go:293] postStartSetup for "ha-908878-m03" (driver="kvm2")
	I0528 20:41:00.358932   22579 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0528 20:41:00.358953   22579 main.go:141] libmachine: (ha-908878-m03) Calling .DriverName
	I0528 20:41:00.359201   22579 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0528 20:41:00.359221   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHHostname
	I0528 20:41:00.361345   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:41:00.361700   22579 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:41:00.361728   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:41:00.361893   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHPort
	I0528 20:41:00.362095   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHKeyPath
	I0528 20:41:00.362258   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHUsername
	I0528 20:41:00.362396   22579 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m03/id_rsa Username:docker}
	I0528 20:41:00.448222   22579 ssh_runner.go:195] Run: cat /etc/os-release
	I0528 20:41:00.452456   22579 info.go:137] Remote host: Buildroot 2023.02.9
	I0528 20:41:00.452477   22579 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/addons for local assets ...
	I0528 20:41:00.452536   22579 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/files for local assets ...
	I0528 20:41:00.452601   22579 filesync.go:149] local asset: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem -> 117602.pem in /etc/ssl/certs
	I0528 20:41:00.452610   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem -> /etc/ssl/certs/117602.pem
	I0528 20:41:00.452684   22579 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0528 20:41:00.462901   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem --> /etc/ssl/certs/117602.pem (1708 bytes)
	I0528 20:41:00.488691   22579 start.go:296] duration metric: took 129.762748ms for postStartSetup
	I0528 20:41:00.488733   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetConfigRaw
	I0528 20:41:00.489250   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetIP
	I0528 20:41:00.491626   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:41:00.491981   22579 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:41:00.492008   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:41:00.492250   22579 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/config.json ...
	I0528 20:41:00.492428   22579 start.go:128] duration metric: took 28.183249732s to createHost
	I0528 20:41:00.492449   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHHostname
	I0528 20:41:00.494554   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:41:00.494899   22579 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:41:00.494920   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:41:00.495085   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHPort
	I0528 20:41:00.495257   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHKeyPath
	I0528 20:41:00.495411   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHKeyPath
	I0528 20:41:00.495596   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHUsername
	I0528 20:41:00.495738   22579 main.go:141] libmachine: Using SSH client type: native
	I0528 20:41:00.495905   22579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0528 20:41:00.495922   22579 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0528 20:41:00.606282   22579 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716928860.588131919
	
	I0528 20:41:00.606299   22579 fix.go:216] guest clock: 1716928860.588131919
	I0528 20:41:00.606306   22579 fix.go:229] Guest: 2024-05-28 20:41:00.588131919 +0000 UTC Remote: 2024-05-28 20:41:00.492438726 +0000 UTC m=+152.016726426 (delta=95.693193ms)
	I0528 20:41:00.606319   22579 fix.go:200] guest clock delta is within tolerance: 95.693193ms
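
The probe above (which appears to be a date +%s.%N run on the guest) yields a 95ms skew that is accepted as within tolerance. A small Go sketch of that comparison is below; the one-second tolerance is an assumed value for illustration, not minikube's configured threshold.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses a guest "seconds.nanoseconds" timestamp and returns
// the difference from the supplied local time.
func clockDelta(guest string, now time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guest), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Pad/truncate the fractional part to exactly nine digits (nanoseconds).
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return 0, err
		}
	}
	return now.Sub(time.Unix(sec, nsec)), nil
}

func main() {
	delta, err := clockDelta("1716928860.588131919", time.Now())
	if err != nil {
		panic(err)
	}
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // assumed tolerance for this sketch
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta < tolerance)
}
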
	I0528 20:41:00.606324   22579 start.go:83] releasing machines lock for "ha-908878-m03", held for 28.297252585s
	I0528 20:41:00.606341   22579 main.go:141] libmachine: (ha-908878-m03) Calling .DriverName
	I0528 20:41:00.606568   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetIP
	I0528 20:41:00.609116   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:41:00.609475   22579 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:41:00.609503   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:41:00.611857   22579 out.go:177] * Found network options:
	I0528 20:41:00.613264   22579 out.go:177]   - NO_PROXY=192.168.39.100,192.168.39.239
	W0528 20:41:00.614453   22579 proxy.go:119] fail to check proxy env: Error ip not in block
	W0528 20:41:00.614480   22579 proxy.go:119] fail to check proxy env: Error ip not in block
	I0528 20:41:00.614496   22579 main.go:141] libmachine: (ha-908878-m03) Calling .DriverName
	I0528 20:41:00.614990   22579 main.go:141] libmachine: (ha-908878-m03) Calling .DriverName
	I0528 20:41:00.615163   22579 main.go:141] libmachine: (ha-908878-m03) Calling .DriverName
	I0528 20:41:00.615264   22579 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0528 20:41:00.615306   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHHostname
	W0528 20:41:00.615347   22579 proxy.go:119] fail to check proxy env: Error ip not in block
	W0528 20:41:00.615372   22579 proxy.go:119] fail to check proxy env: Error ip not in block
	I0528 20:41:00.615437   22579 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0528 20:41:00.615458   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHHostname
	I0528 20:41:00.617989   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:41:00.618208   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:41:00.618411   22579 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:41:00.618439   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:41:00.618608   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHPort
	I0528 20:41:00.618756   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHKeyPath
	I0528 20:41:00.618766   22579 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:41:00.618786   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:41:00.618928   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHUsername
	I0528 20:41:00.618946   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHPort
	I0528 20:41:00.619096   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHKeyPath
	I0528 20:41:00.619088   22579 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m03/id_rsa Username:docker}
	I0528 20:41:00.619222   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHUsername
	I0528 20:41:00.619353   22579 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m03/id_rsa Username:docker}
	I0528 20:41:00.856279   22579 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0528 20:41:00.862437   22579 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0528 20:41:00.862494   22579 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 20:41:00.879166   22579 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0528 20:41:00.879190   22579 start.go:494] detecting cgroup driver to use...
	I0528 20:41:00.879252   22579 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0528 20:41:00.896017   22579 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 20:41:00.909602   22579 docker.go:217] disabling cri-docker service (if available) ...
	I0528 20:41:00.909651   22579 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0528 20:41:00.924954   22579 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0528 20:41:00.940065   22579 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0528 20:41:01.053520   22579 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0528 20:41:01.204877   22579 docker.go:233] disabling docker service ...
	I0528 20:41:01.204948   22579 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0528 20:41:01.220221   22579 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0528 20:41:01.233164   22579 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0528 20:41:01.370367   22579 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0528 20:41:01.495497   22579 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0528 20:41:01.510142   22579 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 20:41:01.529604   22579 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0528 20:41:01.529668   22579 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:41:01.540330   22579 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0528 20:41:01.540390   22579 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:41:01.551028   22579 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:41:01.561469   22579 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:41:01.572897   22579 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0528 20:41:01.584697   22579 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:41:01.597498   22579 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:41:01.618112   22579 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
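
Read together, the sed edits above amount to roughly the following fragment of /etc/crio/crio.conf.d/02-crio.conf. This is a reconstruction from the commands in this log, not a captured file, and the table headers are assumed from CRI-O's usual config layout.

[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
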
	I0528 20:41:01.629331   22579 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0528 20:41:01.639391   22579 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0528 20:41:01.639445   22579 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0528 20:41:01.652370   22579 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0528 20:41:01.662436   22579 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 20:41:01.792319   22579 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0528 20:41:01.928887   22579 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0528 20:41:01.928968   22579 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0528 20:41:01.933740   22579 start.go:562] Will wait 60s for crictl version
	I0528 20:41:01.933809   22579 ssh_runner.go:195] Run: which crictl
	I0528 20:41:01.937541   22579 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0528 20:41:01.976649   22579 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0528 20:41:01.976735   22579 ssh_runner.go:195] Run: crio --version
	I0528 20:41:02.005833   22579 ssh_runner.go:195] Run: crio --version
	I0528 20:41:02.037660   22579 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0528 20:41:02.038815   22579 out.go:177]   - env NO_PROXY=192.168.39.100
	I0528 20:41:02.040107   22579 out.go:177]   - env NO_PROXY=192.168.39.100,192.168.39.239
	I0528 20:41:02.041333   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetIP
	I0528 20:41:02.043750   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:41:02.044044   22579 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:41:02.044076   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:41:02.044253   22579 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0528 20:41:02.048567   22579 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 20:41:02.062479   22579 mustload.go:65] Loading cluster: ha-908878
	I0528 20:41:02.062721   22579 config.go:182] Loaded profile config "ha-908878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 20:41:02.063015   22579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:41:02.063055   22579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:41:02.077127   22579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37145
	I0528 20:41:02.077499   22579 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:41:02.077990   22579 main.go:141] libmachine: Using API Version  1
	I0528 20:41:02.078012   22579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:41:02.078321   22579 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:41:02.078511   22579 main.go:141] libmachine: (ha-908878) Calling .GetState
	I0528 20:41:02.079938   22579 host.go:66] Checking if "ha-908878" exists ...
	I0528 20:41:02.080215   22579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:41:02.080246   22579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:41:02.094090   22579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39311
	I0528 20:41:02.094479   22579 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:41:02.094947   22579 main.go:141] libmachine: Using API Version  1
	I0528 20:41:02.094964   22579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:41:02.095254   22579 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:41:02.095454   22579 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:41:02.095624   22579 certs.go:68] Setting up /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878 for IP: 192.168.39.73
	I0528 20:41:02.095633   22579 certs.go:194] generating shared ca certs ...
	I0528 20:41:02.095645   22579 certs.go:226] acquiring lock for ca certs: {Name:mkf081a2e878fd451cfbdf01dc28a5f7a6a3abc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:41:02.095771   22579 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key
	I0528 20:41:02.095830   22579 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key
	I0528 20:41:02.095843   22579 certs.go:256] generating profile certs ...
	I0528 20:41:02.095930   22579 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/client.key
	I0528 20:41:02.095960   22579 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key.25750a69
	I0528 20:41:02.095977   22579 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt.25750a69 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.100 192.168.39.239 192.168.39.73 192.168.39.254]
	I0528 20:41:02.254924   22579 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt.25750a69 ...
	I0528 20:41:02.254954   22579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt.25750a69: {Name:mk58313499148b52ec97dc34165b38b9ed8d227b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:41:02.255108   22579 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key.25750a69 ...
	I0528 20:41:02.255122   22579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key.25750a69: {Name:mk956dafa3c18b705956b9d3cb0dd665fa1d7a7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:41:02.255189   22579 certs.go:381] copying /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt.25750a69 -> /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt
	I0528 20:41:02.255315   22579 certs.go:385] copying /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key.25750a69 -> /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key
	I0528 20:41:02.255428   22579 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/proxy-client.key
	I0528 20:41:02.255441   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0528 20:41:02.255453   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0528 20:41:02.255464   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0528 20:41:02.255479   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0528 20:41:02.255494   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0528 20:41:02.255506   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0528 20:41:02.255518   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0528 20:41:02.255531   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0528 20:41:02.255578   22579 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem (1338 bytes)
	W0528 20:41:02.255604   22579 certs.go:480] ignoring /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760_empty.pem, impossibly tiny 0 bytes
	I0528 20:41:02.255613   22579 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem (1675 bytes)
	I0528 20:41:02.255633   22579 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem (1078 bytes)
	I0528 20:41:02.255654   22579 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem (1123 bytes)
	I0528 20:41:02.255676   22579 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem (1675 bytes)
	I0528 20:41:02.255711   22579 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem (1708 bytes)
	I0528 20:41:02.255735   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem -> /usr/share/ca-certificates/117602.pem
	I0528 20:41:02.255749   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0528 20:41:02.255760   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem -> /usr/share/ca-certificates/11760.pem
	I0528 20:41:02.255789   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:41:02.258851   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:41:02.259277   22579 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:41:02.259304   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:41:02.259475   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:41:02.259647   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:41:02.259760   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:41:02.259855   22579 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/id_rsa Username:docker}
	I0528 20:41:02.338012   22579 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0528 20:41:02.343982   22579 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0528 20:41:02.355493   22579 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0528 20:41:02.359726   22579 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0528 20:41:02.370384   22579 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0528 20:41:02.375348   22579 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0528 20:41:02.387184   22579 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0528 20:41:02.391211   22579 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0528 20:41:02.402117   22579 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0528 20:41:02.407250   22579 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0528 20:41:02.420121   22579 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0528 20:41:02.424452   22579 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0528 20:41:02.435455   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0528 20:41:02.462917   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0528 20:41:02.488517   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0528 20:41:02.511647   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0528 20:41:02.533936   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0528 20:41:02.556162   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0528 20:41:02.578549   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0528 20:41:02.601950   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0528 20:41:02.627962   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem --> /usr/share/ca-certificates/117602.pem (1708 bytes)
	I0528 20:41:02.652566   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0528 20:41:02.678156   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem --> /usr/share/ca-certificates/11760.pem (1338 bytes)
	I0528 20:41:02.702155   22579 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0528 20:41:02.718360   22579 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0528 20:41:02.736350   22579 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0528 20:41:02.752301   22579 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0528 20:41:02.768517   22579 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0528 20:41:02.784318   22579 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0528 20:41:02.799999   22579 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0528 20:41:02.815934   22579 ssh_runner.go:195] Run: openssl version
	I0528 20:41:02.821967   22579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/117602.pem && ln -fs /usr/share/ca-certificates/117602.pem /etc/ssl/certs/117602.pem"
	I0528 20:41:02.834372   22579 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117602.pem
	I0528 20:41:02.839042   22579 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 28 20:34 /usr/share/ca-certificates/117602.pem
	I0528 20:41:02.839089   22579 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117602.pem
	I0528 20:41:02.845026   22579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/117602.pem /etc/ssl/certs/3ec20f2e.0"
	I0528 20:41:02.857373   22579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0528 20:41:02.870549   22579 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0528 20:41:02.875252   22579 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 28 20:22 /usr/share/ca-certificates/minikubeCA.pem
	I0528 20:41:02.875319   22579 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0528 20:41:02.881064   22579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0528 20:41:02.892281   22579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11760.pem && ln -fs /usr/share/ca-certificates/11760.pem /etc/ssl/certs/11760.pem"
	I0528 20:41:02.903533   22579 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11760.pem
	I0528 20:41:02.907870   22579 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 28 20:34 /usr/share/ca-certificates/11760.pem
	I0528 20:41:02.907922   22579 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11760.pem
	I0528 20:41:02.913242   22579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11760.pem /etc/ssl/certs/51391683.0"
	I0528 20:41:02.925437   22579 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0528 20:41:02.929789   22579 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0528 20:41:02.929847   22579 kubeadm.go:928] updating node {m03 192.168.39.73 8443 v1.30.1 crio true true} ...
	I0528 20:41:02.929930   22579 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-908878-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.73
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-908878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0528 20:41:02.929965   22579 kube-vip.go:115] generating kube-vip config ...
	I0528 20:41:02.929993   22579 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0528 20:41:02.945244   22579 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0528 20:41:02.945296   22579 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0528 20:41:02.945338   22579 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0528 20:41:02.955092   22579 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0528 20:41:02.955149   22579 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0528 20:41:02.964780   22579 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0528 20:41:02.964801   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/linux/amd64/v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0528 20:41:02.964812   22579 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256
	I0528 20:41:02.964836   22579 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256
	I0528 20:41:02.964855   22579 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 20:41:02.964856   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/linux/amd64/v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0528 20:41:02.964872   22579 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0528 20:41:02.964917   22579 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0528 20:41:02.969140   22579 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0528 20:41:02.969162   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/cache/linux/amd64/v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0528 20:41:03.010500   22579 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0528 20:41:03.010507   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/linux/amd64/v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0528 20:41:03.010560   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/cache/linux/amd64/v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0528 20:41:03.010643   22579 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0528 20:41:03.057961   22579 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0528 20:41:03.058002   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/cache/linux/amd64/v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
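
The kubeadm, kubectl and kubelet binaries above are fetched from dl.k8s.io with the companion .sha256 file named in the checksum= URLs earlier in the log. A hedged Go sketch of that download-and-verify pattern follows; the URLs come from the log, but the helper itself is illustrative and not minikube's downloader.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads a URL into memory and fails on non-200 responses.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl"
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}
	// Compare the local SHA-256 against the published checksum file.
	got := sha256.Sum256(bin)
	want := strings.Fields(string(sum))[0]
	if hex.EncodeToString(got[:]) != want {
		panic("checksum mismatch for kubectl")
	}
	if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
		panic(err)
	}
	fmt.Println("kubectl verified and written")
}
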
	I0528 20:41:03.905521   22579 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0528 20:41:03.916413   22579 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0528 20:41:03.933857   22579 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0528 20:41:03.950796   22579 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0528 20:41:03.969238   22579 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0528 20:41:03.973578   22579 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 20:41:03.987320   22579 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 20:41:04.124115   22579 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 20:41:04.141725   22579 host.go:66] Checking if "ha-908878" exists ...
	I0528 20:41:04.142097   22579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:41:04.142137   22579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:41:04.157653   22579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36039
	I0528 20:41:04.158148   22579 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:41:04.158681   22579 main.go:141] libmachine: Using API Version  1
	I0528 20:41:04.158706   22579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:41:04.158998   22579 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:41:04.159375   22579 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:41:04.159565   22579 start.go:316] joinCluster: &{Name:ha-908878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-908878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.239 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.73 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

	I0528 20:41:04.159677   22579 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0528 20:41:04.159692   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:41:04.162581   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:41:04.162955   22579 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:41:04.162982   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:41:04.163126   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:41:04.163302   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:41:04.163464   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:41:04.163593   22579 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/id_rsa Username:docker}
	I0528 20:41:04.328854   22579 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.73 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0528 20:41:04.328907   22579 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token k1hlwe.i66bv2ctvga46c3g --discovery-token-ca-cert-hash sha256:1de7c92f7c6fda1d3adee02fe3e58e442e8ad50d9d224864ca8764aa1a3ae8bb --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-908878-m03 --control-plane --apiserver-advertise-address=192.168.39.73 --apiserver-bind-port=8443"
	I0528 20:41:27.532526   22579 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token k1hlwe.i66bv2ctvga46c3g --discovery-token-ca-cert-hash sha256:1de7c92f7c6fda1d3adee02fe3e58e442e8ad50d9d224864ca8764aa1a3ae8bb --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-908878-m03 --control-plane --apiserver-advertise-address=192.168.39.73 --apiserver-bind-port=8443": (23.203579275s)
	I0528 20:41:27.532567   22579 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0528 20:41:28.045867   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-908878-m03 minikube.k8s.io/updated_at=2024_05_28T20_41_28_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82 minikube.k8s.io/name=ha-908878 minikube.k8s.io/primary=false
	I0528 20:41:28.166277   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-908878-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0528 20:41:28.280172   22579 start.go:318] duration metric: took 24.120602222s to joinCluster
	I0528 20:41:28.280242   22579 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.73 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0528 20:41:28.281519   22579 out.go:177] * Verifying Kubernetes components...
	I0528 20:41:28.280514   22579 config.go:182] Loaded profile config "ha-908878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 20:41:28.282678   22579 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 20:41:28.558017   22579 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 20:41:28.575792   22579 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 20:41:28.576116   22579 kapi.go:59] client config for ha-908878: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/client.crt", KeyFile:"/home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/client.key", CAFile:"/home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf8220), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0528 20:41:28.576202   22579 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.100:8443
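
The kapi.go line shows the rest.Config minikube builds from the kubeconfig: it initially points at the HA virtual IP (192.168.39.254) and is then overridden to the primary's address once that entry is detected as stale. A client-go sketch of the same two steps, using the kubeconfig path from the log (assumed readable locally):

    // client_sketch.go: build a rest.Config from a kubeconfig and override a
    // stale server address before creating the clientset.
    package main

    import (
        "log"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("",
            "/home/jenkins/minikube-integration/18966-3963/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        // The kubeconfig still names the HA VIP; talk to the primary directly.
        cfg.Host = "https://192.168.39.100:8443"

        if _, err := kubernetes.NewForConfig(cfg); err != nil {
            log.Fatal(err)
        }
    }
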
	I0528 20:41:28.576472   22579 node_ready.go:35] waiting up to 6m0s for node "ha-908878-m03" to be "Ready" ...
	I0528 20:41:28.576551   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:28.576561   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:28.576573   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:28.576582   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:28.581244   22579 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 20:41:29.076650   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:29.076679   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:29.076689   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:29.076694   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:29.080062   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:29.576973   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:29.577002   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:29.577013   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:29.577019   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:29.580386   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:30.077582   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:30.077602   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:30.077608   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:30.077612   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:30.080333   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:30.577164   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:30.577189   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:30.577201   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:30.577206   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:30.580013   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:30.580826   22579 node_ready.go:53] node "ha-908878-m03" has status "Ready":"False"
	I0528 20:41:31.076834   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:31.076858   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:31.076869   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:31.076876   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:31.080069   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:31.577469   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:31.577497   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:31.577507   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:31.577513   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:31.581059   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:32.076832   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:32.076855   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:32.076865   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:32.076871   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:32.081751   22579 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 20:41:32.577063   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:32.577086   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:32.577093   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:32.577097   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:32.582036   22579 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 20:41:32.583242   22579 node_ready.go:53] node "ha-908878-m03" has status "Ready":"False"
	I0528 20:41:33.077662   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:33.077685   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:33.077693   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:33.077697   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:33.081149   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:33.577516   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:33.577538   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:33.577548   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:33.577552   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:33.582083   22579 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 20:41:34.077616   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:34.077638   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:34.077648   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:34.077655   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:34.081428   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:34.577011   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:34.577035   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:34.577043   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:34.577050   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:34.580350   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:35.077384   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:35.077404   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:35.077429   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:35.077433   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:35.080731   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:35.081469   22579 node_ready.go:49] node "ha-908878-m03" has status "Ready":"True"
	I0528 20:41:35.081490   22579 node_ready.go:38] duration metric: took 6.504999349s for node "ha-908878-m03" to be "Ready" ...
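
The repeated GETs of /api/v1/nodes/ha-908878-m03 above are a simple readiness poll: fetch the node roughly every 500ms and check whether its Ready condition is True, which here took about 6.5 seconds. The same check expressed with client-go (a sketch, not minikube's round-tripper-based code; the default kubeconfig location is an assumption):

    // node_ready_sketch.go: poll a node until its Ready condition is True.
    package main

    import (
        "context"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func nodeReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        // The 500ms interval and 6m timeout mirror the cadence and the
        // "waiting up to 6m0s" line in the log.
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, "ha-908878-m03", metav1.GetOptions{})
                if err != nil {
                    return false, nil // tolerate transient errors and keep polling
                }
                return nodeReady(node), nil
            })
        if err != nil {
            log.Fatal(err)
        }
        log.Println("node is Ready")
    }
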
	I0528 20:41:35.081498   22579 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 20:41:35.081546   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods
	I0528 20:41:35.081555   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:35.081562   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:35.081567   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:35.087521   22579 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 20:41:35.093456   22579 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-5fmns" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:35.093524   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5fmns
	I0528 20:41:35.093529   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:35.093535   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:35.093538   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:35.096612   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:35.097689   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878
	I0528 20:41:35.097703   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:35.097710   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:35.097713   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:35.100145   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:35.100788   22579 pod_ready.go:92] pod "coredns-7db6d8ff4d-5fmns" in "kube-system" namespace has status "Ready":"True"
	I0528 20:41:35.100804   22579 pod_ready.go:81] duration metric: took 7.327582ms for pod "coredns-7db6d8ff4d-5fmns" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:35.100811   22579 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mvx67" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:35.100855   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mvx67
	I0528 20:41:35.100863   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:35.100869   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:35.100873   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:35.103504   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:35.104108   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878
	I0528 20:41:35.104124   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:35.104131   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:35.104134   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:35.106626   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:35.107168   22579 pod_ready.go:92] pod "coredns-7db6d8ff4d-mvx67" in "kube-system" namespace has status "Ready":"True"
	I0528 20:41:35.107186   22579 pod_ready.go:81] duration metric: took 6.368888ms for pod "coredns-7db6d8ff4d-mvx67" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:35.107199   22579 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-908878" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:35.107261   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878
	I0528 20:41:35.107274   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:35.107284   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:35.107289   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:35.109851   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:35.110371   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878
	I0528 20:41:35.110384   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:35.110391   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:35.110395   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:35.113062   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:35.113587   22579 pod_ready.go:92] pod "etcd-ha-908878" in "kube-system" namespace has status "Ready":"True"
	I0528 20:41:35.113602   22579 pod_ready.go:81] duration metric: took 6.39665ms for pod "etcd-ha-908878" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:35.113609   22579 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-908878-m02" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:35.113645   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m02
	I0528 20:41:35.113652   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:35.113658   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:35.113662   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:35.116849   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:35.117944   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:41:35.117960   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:35.117971   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:35.117977   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:35.120520   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:35.120945   22579 pod_ready.go:92] pod "etcd-ha-908878-m02" in "kube-system" namespace has status "Ready":"True"
	I0528 20:41:35.120959   22579 pod_ready.go:81] duration metric: took 7.345393ms for pod "etcd-ha-908878-m02" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:35.120967   22579 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-908878-m03" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:35.278365   22579 request.go:629] Waited for 157.321448ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m03
	I0528 20:41:35.278446   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m03
	I0528 20:41:35.278455   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:35.278462   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:35.278469   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:35.281934   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
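
The "Waited for ... due to client-side throttling, not priority and fairness" lines are produced by client-go's own rate limiter, not by the API server: the rest.Config dumped earlier has QPS:0 and Burst:0, so the client falls back to its defaults (5 requests/s with a burst of 10) and briefly queues the paired node/pod GETs. If those waits mattered, raising the limits on the config is enough, as in this small sketch (values are arbitrary examples, kubeconfig location assumed):

    // throttle_sketch.go: raise client-go's client-side rate limits.
    package main

    import (
        "log"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        cfg.QPS = 50    // client-go defaults to 5 requests/s when this is left at 0
        cfg.Burst = 100 // and to a burst of 10
        if _, err := kubernetes.NewForConfig(cfg); err != nil {
            log.Fatal(err)
        }
    }
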
	I0528 20:41:35.478332   22579 request.go:629] Waited for 195.274194ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:35.478388   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:35.478393   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:35.478400   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:35.478408   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:35.482490   22579 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 20:41:35.677814   22579 request.go:629] Waited for 56.219595ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m03
	I0528 20:41:35.677881   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m03
	I0528 20:41:35.677888   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:35.677902   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:35.677911   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:35.682013   22579 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 20:41:35.878393   22579 request.go:629] Waited for 195.365934ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:35.878445   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:35.878450   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:35.878457   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:35.878470   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:35.881747   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:36.121555   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m03
	I0528 20:41:36.121595   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:36.121606   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:36.121612   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:36.124169   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:36.278240   22579 request.go:629] Waited for 153.312957ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:36.278307   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:36.278314   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:36.278324   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:36.278333   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:36.282054   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:36.621744   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m03
	I0528 20:41:36.621777   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:36.621783   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:36.621785   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:36.624904   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:36.677981   22579 request.go:629] Waited for 52.231287ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:36.678067   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:36.678079   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:36.678090   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:36.678095   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:36.680792   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:37.121256   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m03
	I0528 20:41:37.121276   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:37.121284   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:37.121288   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:37.124591   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:37.125280   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:37.125295   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:37.125302   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:37.125307   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:37.127810   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:37.128328   22579 pod_ready.go:102] pod "etcd-ha-908878-m03" in "kube-system" namespace has status "Ready":"False"
	I0528 20:41:37.621136   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m03
	I0528 20:41:37.621157   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:37.621164   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:37.621169   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:37.624348   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:37.625160   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:37.625175   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:37.625182   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:37.625189   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:37.627751   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:38.121329   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m03
	I0528 20:41:38.121357   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:38.121384   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:38.121389   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:38.126397   22579 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 20:41:38.127043   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:38.127060   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:38.127068   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:38.127071   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:38.129636   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:38.621865   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m03
	I0528 20:41:38.621886   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:38.621893   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:38.621898   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:38.624799   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:38.625730   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:38.625744   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:38.625751   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:38.625755   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:38.628377   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:39.121835   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m03
	I0528 20:41:39.121856   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:39.121864   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:39.121869   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:39.124910   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:39.125636   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:39.125653   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:39.125663   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:39.125669   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:39.128117   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:39.128678   22579 pod_ready.go:102] pod "etcd-ha-908878-m03" in "kube-system" namespace has status "Ready":"False"
	I0528 20:41:39.622027   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m03
	I0528 20:41:39.622052   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:39.622065   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:39.622070   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:39.625337   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:39.626327   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:39.626344   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:39.626351   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:39.626354   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:39.628950   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:40.121997   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m03
	I0528 20:41:40.122023   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:40.122034   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:40.122040   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:40.125013   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:40.125637   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:40.125654   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:40.125663   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:40.125668   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:40.129110   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:40.621278   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m03
	I0528 20:41:40.621297   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:40.621305   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:40.621311   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:40.624393   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:40.625284   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:40.625302   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:40.625310   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:40.625316   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:40.630402   22579 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 20:41:41.122042   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m03
	I0528 20:41:41.122065   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:41.122076   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:41.122081   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:41.126202   22579 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 20:41:41.126901   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:41.126919   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:41.126929   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:41.126935   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:41.130537   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:41.131125   22579 pod_ready.go:102] pod "etcd-ha-908878-m03" in "kube-system" namespace has status "Ready":"False"
	I0528 20:41:41.621967   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m03
	I0528 20:41:41.621995   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:41.622013   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:41.622019   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:41.624840   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:41.625425   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:41.625439   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:41.625445   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:41.625449   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:41.628217   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:42.121149   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m03
	I0528 20:41:42.121170   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:42.121177   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:42.121181   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:42.124086   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:42.125061   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:42.125074   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:42.125081   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:42.125084   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:42.129416   22579 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 20:41:42.621323   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m03
	I0528 20:41:42.621348   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:42.621359   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:42.621365   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:42.625262   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:42.626006   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:42.626021   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:42.626028   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:42.626031   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:42.628611   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:43.121573   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m03
	I0528 20:41:43.121605   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:43.121613   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:43.121616   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:43.124869   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:43.125691   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:43.125705   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:43.125712   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:43.125716   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:43.128245   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:43.621547   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m03
	I0528 20:41:43.621577   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:43.621587   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:43.621590   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:43.625259   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:43.625865   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:43.625881   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:43.625888   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:43.625892   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:43.628340   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:43.628869   22579 pod_ready.go:102] pod "etcd-ha-908878-m03" in "kube-system" namespace has status "Ready":"False"
	I0528 20:41:44.121837   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m03
	I0528 20:41:44.121866   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:44.121878   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:44.121885   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:44.125024   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:44.125895   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:44.125915   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:44.125924   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:44.125928   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:44.128451   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:44.621119   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m03
	I0528 20:41:44.621139   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:44.621147   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:44.621150   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:44.624180   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:44.624972   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:44.624992   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:44.625002   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:44.625010   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:44.627772   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:44.628401   22579 pod_ready.go:92] pod "etcd-ha-908878-m03" in "kube-system" namespace has status "Ready":"True"
	I0528 20:41:44.628421   22579 pod_ready.go:81] duration metric: took 9.50744498s for pod "etcd-ha-908878-m03" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:44.628441   22579 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-908878" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:44.628511   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-908878
	I0528 20:41:44.628525   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:44.628535   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:44.628544   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:44.631158   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:44.631744   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878
	I0528 20:41:44.631761   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:44.631768   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:44.631772   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:44.634025   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:44.634480   22579 pod_ready.go:92] pod "kube-apiserver-ha-908878" in "kube-system" namespace has status "Ready":"True"
	I0528 20:41:44.634497   22579 pod_ready.go:81] duration metric: took 6.044261ms for pod "kube-apiserver-ha-908878" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:44.634507   22579 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-908878-m02" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:44.634565   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-908878-m02
	I0528 20:41:44.634576   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:44.634586   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:44.634596   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:44.636672   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:44.637258   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:41:44.637273   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:44.637280   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:44.637284   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:44.639578   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:44.640142   22579 pod_ready.go:92] pod "kube-apiserver-ha-908878-m02" in "kube-system" namespace has status "Ready":"True"
	I0528 20:41:44.640158   22579 pod_ready.go:81] duration metric: took 5.643738ms for pod "kube-apiserver-ha-908878-m02" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:44.640166   22579 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-908878-m03" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:44.640216   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-908878-m03
	I0528 20:41:44.640224   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:44.640230   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:44.640237   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:44.642688   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:44.643440   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:44.643453   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:44.643460   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:44.643464   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:44.646255   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:44.646798   22579 pod_ready.go:92] pod "kube-apiserver-ha-908878-m03" in "kube-system" namespace has status "Ready":"True"
	I0528 20:41:44.646821   22579 pod_ready.go:81] duration metric: took 6.642368ms for pod "kube-apiserver-ha-908878-m03" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:44.646832   22579 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-908878" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:44.646883   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-908878
	I0528 20:41:44.646893   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:44.646904   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:44.646914   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:44.650103   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:44.677820   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878
	I0528 20:41:44.677834   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:44.677842   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:44.677846   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:44.680523   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:44.680918   22579 pod_ready.go:92] pod "kube-controller-manager-ha-908878" in "kube-system" namespace has status "Ready":"True"
	I0528 20:41:44.680933   22579 pod_ready.go:81] duration metric: took 34.091199ms for pod "kube-controller-manager-ha-908878" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:44.680953   22579 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-908878-m02" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:44.878394   22579 request.go:629] Waited for 197.354576ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-908878-m02
	I0528 20:41:44.878465   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-908878-m02
	I0528 20:41:44.878472   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:44.878482   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:44.878488   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:44.881733   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:45.077835   22579 request.go:629] Waited for 195.319662ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:41:45.077923   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:41:45.077934   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:45.077945   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:45.077952   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:45.081869   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:45.082908   22579 pod_ready.go:92] pod "kube-controller-manager-ha-908878-m02" in "kube-system" namespace has status "Ready":"True"
	I0528 20:41:45.082930   22579 pod_ready.go:81] duration metric: took 401.970164ms for pod "kube-controller-manager-ha-908878-m02" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:45.082943   22579 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-908878-m03" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:45.278017   22579 request.go:629] Waited for 194.999461ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-908878-m03
	I0528 20:41:45.278102   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-908878-m03
	I0528 20:41:45.278111   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:45.278122   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:45.278143   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:45.281456   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:45.478467   22579 request.go:629] Waited for 196.368725ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:45.478518   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:45.478523   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:45.478530   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:45.478535   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:45.481621   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:45.482212   22579 pod_ready.go:92] pod "kube-controller-manager-ha-908878-m03" in "kube-system" namespace has status "Ready":"True"
	I0528 20:41:45.482230   22579 pod_ready.go:81] duration metric: took 399.279724ms for pod "kube-controller-manager-ha-908878-m03" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:45.482240   22579 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4vjp6" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:45.678355   22579 request.go:629] Waited for 196.03886ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4vjp6
	I0528 20:41:45.678412   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4vjp6
	I0528 20:41:45.678418   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:45.678426   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:45.678430   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:45.681644   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:45.877836   22579 request.go:629] Waited for 195.316455ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:45.877906   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:45.877913   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:45.877920   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:45.877926   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:45.880825   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:45.881470   22579 pod_ready.go:92] pod "kube-proxy-4vjp6" in "kube-system" namespace has status "Ready":"True"
	I0528 20:41:45.881490   22579 pod_ready.go:81] duration metric: took 399.243929ms for pod "kube-proxy-4vjp6" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:45.881504   22579 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ng8mq" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:46.077466   22579 request.go:629] Waited for 195.898762ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ng8mq
	I0528 20:41:46.077557   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ng8mq
	I0528 20:41:46.077568   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:46.077575   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:46.077579   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:46.080532   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:46.277396   22579 request.go:629] Waited for 196.114941ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-908878
	I0528 20:41:46.277447   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878
	I0528 20:41:46.277454   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:46.277462   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:46.277469   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:46.280545   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:46.281104   22579 pod_ready.go:92] pod "kube-proxy-ng8mq" in "kube-system" namespace has status "Ready":"True"
	I0528 20:41:46.281121   22579 pod_ready.go:81] duration metric: took 399.610916ms for pod "kube-proxy-ng8mq" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:46.281130   22579 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pg89k" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:46.478401   22579 request.go:629] Waited for 197.207302ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pg89k
	I0528 20:41:46.478448   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pg89k
	I0528 20:41:46.478453   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:46.478463   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:46.478470   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:46.481950   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:46.678208   22579 request.go:629] Waited for 195.338894ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:41:46.678279   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:41:46.678284   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:46.678292   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:46.678300   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:46.681777   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:46.682526   22579 pod_ready.go:92] pod "kube-proxy-pg89k" in "kube-system" namespace has status "Ready":"True"
	I0528 20:41:46.682545   22579 pod_ready.go:81] duration metric: took 401.409669ms for pod "kube-proxy-pg89k" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:46.682554   22579 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-908878" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:46.877586   22579 request.go:629] Waited for 194.974945ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-908878
	I0528 20:41:46.877640   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-908878
	I0528 20:41:46.877646   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:46.877654   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:46.877659   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:46.880932   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:47.078104   22579 request.go:629] Waited for 196.356071ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-908878
	I0528 20:41:47.078162   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878
	I0528 20:41:47.078177   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:47.078189   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:47.078205   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:47.081375   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:47.082233   22579 pod_ready.go:92] pod "kube-scheduler-ha-908878" in "kube-system" namespace has status "Ready":"True"
	I0528 20:41:47.082256   22579 pod_ready.go:81] duration metric: took 399.695122ms for pod "kube-scheduler-ha-908878" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:47.082269   22579 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-908878-m02" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:47.277946   22579 request.go:629] Waited for 195.584259ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-908878-m02
	I0528 20:41:47.278014   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-908878-m02
	I0528 20:41:47.278020   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:47.278027   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:47.278031   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:47.281661   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:47.477823   22579 request.go:629] Waited for 195.407332ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:41:47.477899   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:41:47.477910   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:47.477921   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:47.477932   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:47.481276   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:47.481960   22579 pod_ready.go:92] pod "kube-scheduler-ha-908878-m02" in "kube-system" namespace has status "Ready":"True"
	I0528 20:41:47.481979   22579 pod_ready.go:81] duration metric: took 399.698411ms for pod "kube-scheduler-ha-908878-m02" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:47.481991   22579 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-908878-m03" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:47.678063   22579 request.go:629] Waited for 196.000158ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-908878-m03
	I0528 20:41:47.678139   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-908878-m03
	I0528 20:41:47.678146   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:47.678157   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:47.678169   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:47.681293   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:47.878397   22579 request.go:629] Waited for 196.378653ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:47.878468   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:47.878476   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:47.878487   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:47.878493   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:47.881699   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:47.882219   22579 pod_ready.go:92] pod "kube-scheduler-ha-908878-m03" in "kube-system" namespace has status "Ready":"True"
	I0528 20:41:47.882236   22579 pod_ready.go:81] duration metric: took 400.237383ms for pod "kube-scheduler-ha-908878-m03" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:47.882248   22579 pod_ready.go:38] duration metric: took 12.800741549s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 20:41:47.882266   22579 api_server.go:52] waiting for apiserver process to appear ...
	I0528 20:41:47.882312   22579 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 20:41:47.898554   22579 api_server.go:72] duration metric: took 19.618274134s to wait for apiserver process to appear ...
	I0528 20:41:47.898575   22579 api_server.go:88] waiting for apiserver healthz status ...
	I0528 20:41:47.898594   22579 api_server.go:253] Checking apiserver healthz at https://192.168.39.100:8443/healthz ...
	I0528 20:41:47.903138   22579 api_server.go:279] https://192.168.39.100:8443/healthz returned 200:
	ok
	I0528 20:41:47.903203   22579 round_trippers.go:463] GET https://192.168.39.100:8443/version
	I0528 20:41:47.903214   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:47.903225   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:47.903233   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:47.904161   22579 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 20:41:47.904277   22579 api_server.go:141] control plane version: v1.30.1
	I0528 20:41:47.904296   22579 api_server.go:131] duration metric: took 5.714061ms to wait for apiserver health ...
	I0528 20:41:47.904306   22579 system_pods.go:43] waiting for kube-system pods to appear ...
	I0528 20:41:48.077697   22579 request.go:629] Waited for 173.320136ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods
	I0528 20:41:48.077803   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods
	I0528 20:41:48.077814   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:48.077823   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:48.077830   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:48.085436   22579 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0528 20:41:48.091810   22579 system_pods.go:59] 24 kube-system pods found
	I0528 20:41:48.091834   22579 system_pods.go:61] "coredns-7db6d8ff4d-5fmns" [41a3bda1-29ba-4982-baf5-0adc97b4eb45] Running
	I0528 20:41:48.091839   22579 system_pods.go:61] "coredns-7db6d8ff4d-mvx67" [0b51beb7-0397-4008-b878-97edd41c6b94] Running
	I0528 20:41:48.091843   22579 system_pods.go:61] "etcd-ha-908878" [4cfaba35-0bd9-476b-95c2-abd111c4fcac] Running
	I0528 20:41:48.091847   22579 system_pods.go:61] "etcd-ha-908878-m02" [cb4f24be-dbf9-4c42-9a55-29cf6f0b6ecc] Running
	I0528 20:41:48.091850   22579 system_pods.go:61] "etcd-ha-908878-m03" [e38e6404-063e-4b01-8079-395f96aa2036] Running
	I0528 20:41:48.091853   22579 system_pods.go:61] "kindnet-6prxw" [77fae8b9-3abd-4a39-81ec-cc782b891331] Running
	I0528 20:41:48.091856   22579 system_pods.go:61] "kindnet-fx2nj" [9d024f44-b6fe-4390-8b26-2f29f4fd5cdf] Running
	I0528 20:41:48.091859   22579 system_pods.go:61] "kindnet-x4mzh" [8069a7ea-0ab1-4064-b982-867dbdfd97aa] Running
	I0528 20:41:48.091862   22579 system_pods.go:61] "kube-apiserver-ha-908878" [ff63f2af-3fc5-496c-b468-7447defad5e6] Running
	I0528 20:41:48.091866   22579 system_pods.go:61] "kube-apiserver-ha-908878-m02" [3a56592b-67cd-44d0-8907-2a62d4a6c671] Running
	I0528 20:41:48.091869   22579 system_pods.go:61] "kube-apiserver-ha-908878-m03" [3b396a1d-9d28-469b-bddf-3a208c197207] Running
	I0528 20:41:48.091872   22579 system_pods.go:61] "kube-controller-manager-ha-908878" [e426060f-307d-41c7-8fb9-ab48709ce2a8] Running
	I0528 20:41:48.091876   22579 system_pods.go:61] "kube-controller-manager-ha-908878-m02" [232c3f41-5ba8-4fdf-848a-f8fb92f33a73] Running
	I0528 20:41:48.091879   22579 system_pods.go:61] "kube-controller-manager-ha-908878-m03" [43b1b03f-a6b5-4de9-afeb-6f488f3bd89e] Running
	I0528 20:41:48.091882   22579 system_pods.go:61] "kube-proxy-4vjp6" [142b5612-0c6b-4aa8-9410-646f2e2812bc] Running
	I0528 20:41:48.091885   22579 system_pods.go:61] "kube-proxy-ng8mq" [ca0b1264-09c7-44b2-ba8c-e145e825fdbe] Running
	I0528 20:41:48.091888   22579 system_pods.go:61] "kube-proxy-pg89k" [6eeda2cd-7b9e-440f-a8c3-c2ea8015106d] Running
	I0528 20:41:48.091891   22579 system_pods.go:61] "kube-scheduler-ha-908878" [7a9859a9-e92c-435b-a70e-5200f67d9589] Running
	I0528 20:41:48.091895   22579 system_pods.go:61] "kube-scheduler-ha-908878-m02" [c03b5557-cdca-4d39-800e-51a3a4f180b7] Running
	I0528 20:41:48.091898   22579 system_pods.go:61] "kube-scheduler-ha-908878-m03" [4699c008-ffdd-447b-a1b1-dc7776b60190] Running
	I0528 20:41:48.091901   22579 system_pods.go:61] "kube-vip-ha-908878" [45e2e6e2-6ea1-4498-aca1-9c3060eb9ca4] Running
	I0528 20:41:48.091904   22579 system_pods.go:61] "kube-vip-ha-908878-m02" [bcbc54fb-d0d4-422a-9e42-d61cd3f390ff] Running
	I0528 20:41:48.091911   22579 system_pods.go:61] "kube-vip-ha-908878-m03" [f1de9ce4-67d2-47ab-8a24-6766c35a73b9] Running
	I0528 20:41:48.091915   22579 system_pods.go:61] "storage-provisioner" [d79872e2-b267-446a-99dc-5bf9f398d31c] Running
	I0528 20:41:48.091920   22579 system_pods.go:74] duration metric: took 187.608951ms to wait for pod list to return data ...
	I0528 20:41:48.091934   22579 default_sa.go:34] waiting for default service account to be created ...
	I0528 20:41:48.278338   22579 request.go:629] Waited for 186.34176ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/default/serviceaccounts
	I0528 20:41:48.278399   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/default/serviceaccounts
	I0528 20:41:48.278412   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:48.278423   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:48.278432   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:48.282323   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:48.282422   22579 default_sa.go:45] found service account: "default"
	I0528 20:41:48.282434   22579 default_sa.go:55] duration metric: took 190.495296ms for default service account to be created ...
	I0528 20:41:48.282442   22579 system_pods.go:116] waiting for k8s-apps to be running ...
	I0528 20:41:48.477831   22579 request.go:629] Waited for 195.307744ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods
	I0528 20:41:48.477891   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods
	I0528 20:41:48.477896   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:48.477906   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:48.477911   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:48.488660   22579 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0528 20:41:48.494939   22579 system_pods.go:86] 24 kube-system pods found
	I0528 20:41:48.494964   22579 system_pods.go:89] "coredns-7db6d8ff4d-5fmns" [41a3bda1-29ba-4982-baf5-0adc97b4eb45] Running
	I0528 20:41:48.494972   22579 system_pods.go:89] "coredns-7db6d8ff4d-mvx67" [0b51beb7-0397-4008-b878-97edd41c6b94] Running
	I0528 20:41:48.494979   22579 system_pods.go:89] "etcd-ha-908878" [4cfaba35-0bd9-476b-95c2-abd111c4fcac] Running
	I0528 20:41:48.494985   22579 system_pods.go:89] "etcd-ha-908878-m02" [cb4f24be-dbf9-4c42-9a55-29cf6f0b6ecc] Running
	I0528 20:41:48.494995   22579 system_pods.go:89] "etcd-ha-908878-m03" [e38e6404-063e-4b01-8079-395f96aa2036] Running
	I0528 20:41:48.495001   22579 system_pods.go:89] "kindnet-6prxw" [77fae8b9-3abd-4a39-81ec-cc782b891331] Running
	I0528 20:41:48.495007   22579 system_pods.go:89] "kindnet-fx2nj" [9d024f44-b6fe-4390-8b26-2f29f4fd5cdf] Running
	I0528 20:41:48.495014   22579 system_pods.go:89] "kindnet-x4mzh" [8069a7ea-0ab1-4064-b982-867dbdfd97aa] Running
	I0528 20:41:48.495027   22579 system_pods.go:89] "kube-apiserver-ha-908878" [ff63f2af-3fc5-496c-b468-7447defad5e6] Running
	I0528 20:41:48.495042   22579 system_pods.go:89] "kube-apiserver-ha-908878-m02" [3a56592b-67cd-44d0-8907-2a62d4a6c671] Running
	I0528 20:41:48.495048   22579 system_pods.go:89] "kube-apiserver-ha-908878-m03" [3b396a1d-9d28-469b-bddf-3a208c197207] Running
	I0528 20:41:48.495056   22579 system_pods.go:89] "kube-controller-manager-ha-908878" [e426060f-307d-41c7-8fb9-ab48709ce2a8] Running
	I0528 20:41:48.495065   22579 system_pods.go:89] "kube-controller-manager-ha-908878-m02" [232c3f41-5ba8-4fdf-848a-f8fb92f33a73] Running
	I0528 20:41:48.495077   22579 system_pods.go:89] "kube-controller-manager-ha-908878-m03" [43b1b03f-a6b5-4de9-afeb-6f488f3bd89e] Running
	I0528 20:41:48.495084   22579 system_pods.go:89] "kube-proxy-4vjp6" [142b5612-0c6b-4aa8-9410-646f2e2812bc] Running
	I0528 20:41:48.495094   22579 system_pods.go:89] "kube-proxy-ng8mq" [ca0b1264-09c7-44b2-ba8c-e145e825fdbe] Running
	I0528 20:41:48.495101   22579 system_pods.go:89] "kube-proxy-pg89k" [6eeda2cd-7b9e-440f-a8c3-c2ea8015106d] Running
	I0528 20:41:48.495111   22579 system_pods.go:89] "kube-scheduler-ha-908878" [7a9859a9-e92c-435b-a70e-5200f67d9589] Running
	I0528 20:41:48.495119   22579 system_pods.go:89] "kube-scheduler-ha-908878-m02" [c03b5557-cdca-4d39-800e-51a3a4f180b7] Running
	I0528 20:41:48.495129   22579 system_pods.go:89] "kube-scheduler-ha-908878-m03" [4699c008-ffdd-447b-a1b1-dc7776b60190] Running
	I0528 20:41:48.495136   22579 system_pods.go:89] "kube-vip-ha-908878" [45e2e6e2-6ea1-4498-aca1-9c3060eb9ca4] Running
	I0528 20:41:48.495145   22579 system_pods.go:89] "kube-vip-ha-908878-m02" [bcbc54fb-d0d4-422a-9e42-d61cd3f390ff] Running
	I0528 20:41:48.495152   22579 system_pods.go:89] "kube-vip-ha-908878-m03" [f1de9ce4-67d2-47ab-8a24-6766c35a73b9] Running
	I0528 20:41:48.495161   22579 system_pods.go:89] "storage-provisioner" [d79872e2-b267-446a-99dc-5bf9f398d31c] Running
	I0528 20:41:48.495171   22579 system_pods.go:126] duration metric: took 212.720492ms to wait for k8s-apps to be running ...
	I0528 20:41:48.495183   22579 system_svc.go:44] waiting for kubelet service to be running ....
	I0528 20:41:48.495230   22579 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 20:41:48.512049   22579 system_svc.go:56] duration metric: took 16.837316ms WaitForService to wait for kubelet
	I0528 20:41:48.512080   22579 kubeadm.go:576] duration metric: took 20.231804569s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 20:41:48.512097   22579 node_conditions.go:102] verifying NodePressure condition ...
	I0528 20:41:48.677717   22579 request.go:629] Waited for 165.458182ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes
	I0528 20:41:48.677788   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes
	I0528 20:41:48.677796   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:48.677806   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:48.677812   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:48.681329   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:48.682448   22579 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 20:41:48.682467   22579 node_conditions.go:123] node cpu capacity is 2
	I0528 20:41:48.682476   22579 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 20:41:48.682482   22579 node_conditions.go:123] node cpu capacity is 2
	I0528 20:41:48.682488   22579 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 20:41:48.682495   22579 node_conditions.go:123] node cpu capacity is 2
	I0528 20:41:48.682500   22579 node_conditions.go:105] duration metric: took 170.39831ms to run NodePressure ...
	I0528 20:41:48.682517   22579 start.go:240] waiting for startup goroutines ...
	I0528 20:41:48.682538   22579 start.go:254] writing updated cluster config ...
	I0528 20:41:48.682825   22579 ssh_runner.go:195] Run: rm -f paused
	I0528 20:41:48.732334   22579 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0528 20:41:48.734539   22579 out.go:177] * Done! kubectl is now configured to use "ha-908878" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	May 28 20:45:16 ha-908878 crio[681]: time="2024-05-28 20:45:16.250644247Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716929116250622851,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c27831ec-238c-4566-a098-c6434e2b6225 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 20:45:16 ha-908878 crio[681]: time="2024-05-28 20:45:16.251259336Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3be4da78-f1c7-46c4-b7e1-639ab7595096 name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:45:16 ha-908878 crio[681]: time="2024-05-28 20:45:16.251327713Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3be4da78-f1c7-46c4-b7e1-639ab7595096 name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:45:16 ha-908878 crio[681]: time="2024-05-28 20:45:16.251597915Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:92c83dd481e567399f2a52a7fbc48eccc99c0cb418fa571fd10df44f361a9f39,PodSandboxId:dfbac4c22bc27b76f3af221ef78b5eb13bffba587238a75f6978ec32db0d2669,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716928912917476588,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ljbzs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a49d7b7-d8ae-44a8-8393-51781cf73591,},Annotations:map[string]string{io.kubernetes.container.hash: 1fd948c0,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c38e07fa546e3574439db0d8c463f8598be9b5ee1b022328a10e812abe191a6,PodSandboxId:fb8a83ba500b4c9c50bcef1be750bfc7e5a7db94160b054581efcafa03519147,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716928766590690596,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mvx67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b51beb7-0397-4008-b878-97edd41c6b94,},Annotations:map[string]string{io.kubernetes.container.hash: 10935396,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b6fe231fc7dbce25d01bf248b4e363bd410d67b785874c03d67b9895b05cd8d,PodSandboxId:0e94953284a5e4d09d285560204b96d126960c1c22367047d92a0697893879af,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716928766576204824,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: d79872e2-b267-446a-99dc-5bf9f398d31c,},Annotations:map[string]string{io.kubernetes.container.hash: 1d6aac92,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2470320e3bec5ccf0d58f255e4bc440917dfb3fae12b2a48365113a51d7dd6e9,PodSandboxId:5333c6894c4460cb033cd36d59ef316b9716e099ae9997c69bc25e869c87b03d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716928766572067060,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5fmns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a3bda1-29
ba-4982-baf5-0adc97b4eb45,},Annotations:map[string]string{io.kubernetes.container.hash: dc173f12,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ea51bf984916bead29c664359f65ea790360f73bf15486bdf96c758189bd69,PodSandboxId:1f695b783edb95cab72476e5f23428dad45f722dd44cbb0bff30bab6aa207223,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CO
NTAINER_RUNNING,CreatedAt:1716928765126540501,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4mzh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8069a7ea-0ab1-4064-b982-867dbdfd97aa,},Annotations:map[string]string{io.kubernetes.container.hash: 730aff3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97ba5f2725852aa19d200b1005c8bc9678a21a2b0a0b768cbf10f1dde692ecbe,PodSandboxId:2a5f076d2569c649f3e0bcb898d11d4c19a26e70dc189cb8501def24b83cdcec,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:171692876
1367490977,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ng8mq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0b1264-09c7-44b2-ba8c-e145e825fdbe,},Annotations:map[string]string{io.kubernetes.container.hash: 3bac967e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20cf414ed60510910dc01761f0afc869180c224d4e26872a468b10e603e8e786,PodSandboxId:9d1408565bd5163dd277d755c852f8d09b92ff4f0ac886493b78b17bc70e95f6,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17169287444
23802201,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10c65935005aeeb3bc67f128e502ec57,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05d5882852e6eb482b9e3ba850d60958305fa84234fd741f3345967a57b1c1f9,PodSandboxId:54beb07b658e504e1a48a48b45286f12adc245e7ae631b29d6fdd4debbc5e82a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716928741087996451,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f1c036da804e98d08f1608248bc0a8f,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:650c6f374c3b3e6697c77d68d23551a1335fae362d137a464df72e2bc23e4e14,PodSandboxId:232d528c76896db38a96f5da2a0ed0761f4f63d26ec15dd06b5b221600ccabff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716928740991369604,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35af7b91312437941e451d7e638db460,},Annotations:map[string]string{io.kubernetes.container.hash: cd1ed920,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aece72d9b21aa208ac4ce8512a57af63132d5882a8675650a980fa6b531af247,PodSandboxId:815ef28c8c10574c11bd2dce9a1acf1d7bfbf4859f7c59b844307688bca34a43,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716928741054454839,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c574865fe7260be39c1b7f67609414,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f926e075722f1f9cbd1b9f6559baa2071c1f539f729ddcb3572a87f64a8934e9,PodSandboxId:ce2508233e4b37815baef24981bbc12636f48bcc8015076d16dce0f2de38f726,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716928740948619502,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 386e57d3e5870130913571793cc3ee94,},Annotations:map[string]string{io.kubernetes.container.hash: f40b1b90,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3be4da78-f1c7-46c4-b7e1-639ab7595096 name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:45:16 ha-908878 crio[681]: time="2024-05-28 20:45:16.290224195Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b63e7ac3-2340-4d3e-b8f7-5ba53fb6f175 name=/runtime.v1.RuntimeService/Version
	May 28 20:45:16 ha-908878 crio[681]: time="2024-05-28 20:45:16.290302300Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b63e7ac3-2340-4d3e-b8f7-5ba53fb6f175 name=/runtime.v1.RuntimeService/Version
	May 28 20:45:16 ha-908878 crio[681]: time="2024-05-28 20:45:16.291165982Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1a25e129-6807-494d-8b52-3de482619378 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 20:45:16 ha-908878 crio[681]: time="2024-05-28 20:45:16.291664315Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716929116291641644,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1a25e129-6807-494d-8b52-3de482619378 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 20:45:16 ha-908878 crio[681]: time="2024-05-28 20:45:16.292549325Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=92318f80-b054-414e-b1e3-f60ee08ec227 name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:45:16 ha-908878 crio[681]: time="2024-05-28 20:45:16.292625637Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=92318f80-b054-414e-b1e3-f60ee08ec227 name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:45:16 ha-908878 crio[681]: time="2024-05-28 20:45:16.292857449Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:92c83dd481e567399f2a52a7fbc48eccc99c0cb418fa571fd10df44f361a9f39,PodSandboxId:dfbac4c22bc27b76f3af221ef78b5eb13bffba587238a75f6978ec32db0d2669,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716928912917476588,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ljbzs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a49d7b7-d8ae-44a8-8393-51781cf73591,},Annotations:map[string]string{io.kubernetes.container.hash: 1fd948c0,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c38e07fa546e3574439db0d8c463f8598be9b5ee1b022328a10e812abe191a6,PodSandboxId:fb8a83ba500b4c9c50bcef1be750bfc7e5a7db94160b054581efcafa03519147,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716928766590690596,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mvx67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b51beb7-0397-4008-b878-97edd41c6b94,},Annotations:map[string]string{io.kubernetes.container.hash: 10935396,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b6fe231fc7dbce25d01bf248b4e363bd410d67b785874c03d67b9895b05cd8d,PodSandboxId:0e94953284a5e4d09d285560204b96d126960c1c22367047d92a0697893879af,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716928766576204824,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: d79872e2-b267-446a-99dc-5bf9f398d31c,},Annotations:map[string]string{io.kubernetes.container.hash: 1d6aac92,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2470320e3bec5ccf0d58f255e4bc440917dfb3fae12b2a48365113a51d7dd6e9,PodSandboxId:5333c6894c4460cb033cd36d59ef316b9716e099ae9997c69bc25e869c87b03d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716928766572067060,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5fmns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a3bda1-29
ba-4982-baf5-0adc97b4eb45,},Annotations:map[string]string{io.kubernetes.container.hash: dc173f12,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ea51bf984916bead29c664359f65ea790360f73bf15486bdf96c758189bd69,PodSandboxId:1f695b783edb95cab72476e5f23428dad45f722dd44cbb0bff30bab6aa207223,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CO
NTAINER_RUNNING,CreatedAt:1716928765126540501,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4mzh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8069a7ea-0ab1-4064-b982-867dbdfd97aa,},Annotations:map[string]string{io.kubernetes.container.hash: 730aff3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97ba5f2725852aa19d200b1005c8bc9678a21a2b0a0b768cbf10f1dde692ecbe,PodSandboxId:2a5f076d2569c649f3e0bcb898d11d4c19a26e70dc189cb8501def24b83cdcec,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:171692876
1367490977,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ng8mq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0b1264-09c7-44b2-ba8c-e145e825fdbe,},Annotations:map[string]string{io.kubernetes.container.hash: 3bac967e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20cf414ed60510910dc01761f0afc869180c224d4e26872a468b10e603e8e786,PodSandboxId:9d1408565bd5163dd277d755c852f8d09b92ff4f0ac886493b78b17bc70e95f6,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17169287444
23802201,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10c65935005aeeb3bc67f128e502ec57,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05d5882852e6eb482b9e3ba850d60958305fa84234fd741f3345967a57b1c1f9,PodSandboxId:54beb07b658e504e1a48a48b45286f12adc245e7ae631b29d6fdd4debbc5e82a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716928741087996451,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f1c036da804e98d08f1608248bc0a8f,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:650c6f374c3b3e6697c77d68d23551a1335fae362d137a464df72e2bc23e4e14,PodSandboxId:232d528c76896db38a96f5da2a0ed0761f4f63d26ec15dd06b5b221600ccabff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716928740991369604,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35af7b91312437941e451d7e638db460,},Annotations:map[string]string{io.kubernetes.container.hash: cd1ed920,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aece72d9b21aa208ac4ce8512a57af63132d5882a8675650a980fa6b531af247,PodSandboxId:815ef28c8c10574c11bd2dce9a1acf1d7bfbf4859f7c59b844307688bca34a43,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716928741054454839,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c574865fe7260be39c1b7f67609414,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f926e075722f1f9cbd1b9f6559baa2071c1f539f729ddcb3572a87f64a8934e9,PodSandboxId:ce2508233e4b37815baef24981bbc12636f48bcc8015076d16dce0f2de38f726,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716928740948619502,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 386e57d3e5870130913571793cc3ee94,},Annotations:map[string]string{io.kubernetes.container.hash: f40b1b90,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=92318f80-b054-414e-b1e3-f60ee08ec227 name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:45:16 ha-908878 crio[681]: time="2024-05-28 20:45:16.340415710Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ddc898b5-c156-4f1a-acaf-b1d251ee3689 name=/runtime.v1.RuntimeService/Version
	May 28 20:45:16 ha-908878 crio[681]: time="2024-05-28 20:45:16.340513457Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ddc898b5-c156-4f1a-acaf-b1d251ee3689 name=/runtime.v1.RuntimeService/Version
	May 28 20:45:16 ha-908878 crio[681]: time="2024-05-28 20:45:16.341986627Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3a7b92e6-4e90-41df-b681-bf2112b71354 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 20:45:16 ha-908878 crio[681]: time="2024-05-28 20:45:16.343307100Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716929116343271669,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3a7b92e6-4e90-41df-b681-bf2112b71354 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 20:45:16 ha-908878 crio[681]: time="2024-05-28 20:45:16.345160057Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1107d4c3-9241-4d8f-94f7-f3f26b5c70b1 name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:45:16 ha-908878 crio[681]: time="2024-05-28 20:45:16.345258018Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1107d4c3-9241-4d8f-94f7-f3f26b5c70b1 name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:45:16 ha-908878 crio[681]: time="2024-05-28 20:45:16.345727956Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:92c83dd481e567399f2a52a7fbc48eccc99c0cb418fa571fd10df44f361a9f39,PodSandboxId:dfbac4c22bc27b76f3af221ef78b5eb13bffba587238a75f6978ec32db0d2669,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716928912917476588,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ljbzs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a49d7b7-d8ae-44a8-8393-51781cf73591,},Annotations:map[string]string{io.kubernetes.container.hash: 1fd948c0,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c38e07fa546e3574439db0d8c463f8598be9b5ee1b022328a10e812abe191a6,PodSandboxId:fb8a83ba500b4c9c50bcef1be750bfc7e5a7db94160b054581efcafa03519147,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716928766590690596,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mvx67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b51beb7-0397-4008-b878-97edd41c6b94,},Annotations:map[string]string{io.kubernetes.container.hash: 10935396,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b6fe231fc7dbce25d01bf248b4e363bd410d67b785874c03d67b9895b05cd8d,PodSandboxId:0e94953284a5e4d09d285560204b96d126960c1c22367047d92a0697893879af,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716928766576204824,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: d79872e2-b267-446a-99dc-5bf9f398d31c,},Annotations:map[string]string{io.kubernetes.container.hash: 1d6aac92,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2470320e3bec5ccf0d58f255e4bc440917dfb3fae12b2a48365113a51d7dd6e9,PodSandboxId:5333c6894c4460cb033cd36d59ef316b9716e099ae9997c69bc25e869c87b03d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716928766572067060,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5fmns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a3bda1-29
ba-4982-baf5-0adc97b4eb45,},Annotations:map[string]string{io.kubernetes.container.hash: dc173f12,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ea51bf984916bead29c664359f65ea790360f73bf15486bdf96c758189bd69,PodSandboxId:1f695b783edb95cab72476e5f23428dad45f722dd44cbb0bff30bab6aa207223,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CO
NTAINER_RUNNING,CreatedAt:1716928765126540501,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4mzh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8069a7ea-0ab1-4064-b982-867dbdfd97aa,},Annotations:map[string]string{io.kubernetes.container.hash: 730aff3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97ba5f2725852aa19d200b1005c8bc9678a21a2b0a0b768cbf10f1dde692ecbe,PodSandboxId:2a5f076d2569c649f3e0bcb898d11d4c19a26e70dc189cb8501def24b83cdcec,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:171692876
1367490977,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ng8mq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0b1264-09c7-44b2-ba8c-e145e825fdbe,},Annotations:map[string]string{io.kubernetes.container.hash: 3bac967e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20cf414ed60510910dc01761f0afc869180c224d4e26872a468b10e603e8e786,PodSandboxId:9d1408565bd5163dd277d755c852f8d09b92ff4f0ac886493b78b17bc70e95f6,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17169287444
23802201,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10c65935005aeeb3bc67f128e502ec57,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05d5882852e6eb482b9e3ba850d60958305fa84234fd741f3345967a57b1c1f9,PodSandboxId:54beb07b658e504e1a48a48b45286f12adc245e7ae631b29d6fdd4debbc5e82a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716928741087996451,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f1c036da804e98d08f1608248bc0a8f,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:650c6f374c3b3e6697c77d68d23551a1335fae362d137a464df72e2bc23e4e14,PodSandboxId:232d528c76896db38a96f5da2a0ed0761f4f63d26ec15dd06b5b221600ccabff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716928740991369604,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35af7b91312437941e451d7e638db460,},Annotations:map[string]string{io.kubernetes.container.hash: cd1ed920,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aece72d9b21aa208ac4ce8512a57af63132d5882a8675650a980fa6b531af247,PodSandboxId:815ef28c8c10574c11bd2dce9a1acf1d7bfbf4859f7c59b844307688bca34a43,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716928741054454839,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c574865fe7260be39c1b7f67609414,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f926e075722f1f9cbd1b9f6559baa2071c1f539f729ddcb3572a87f64a8934e9,PodSandboxId:ce2508233e4b37815baef24981bbc12636f48bcc8015076d16dce0f2de38f726,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716928740948619502,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 386e57d3e5870130913571793cc3ee94,},Annotations:map[string]string{io.kubernetes.container.hash: f40b1b90,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1107d4c3-9241-4d8f-94f7-f3f26b5c70b1 name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:45:16 ha-908878 crio[681]: time="2024-05-28 20:45:16.386969234Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bb730d91-ce5f-4e32-9bd9-2b1eb540e44d name=/runtime.v1.RuntimeService/Version
	May 28 20:45:16 ha-908878 crio[681]: time="2024-05-28 20:45:16.387045753Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bb730d91-ce5f-4e32-9bd9-2b1eb540e44d name=/runtime.v1.RuntimeService/Version
	May 28 20:45:16 ha-908878 crio[681]: time="2024-05-28 20:45:16.388326132Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0485e79d-e8b7-4707-ac59-fb0e28bbe279 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 20:45:16 ha-908878 crio[681]: time="2024-05-28 20:45:16.388775655Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716929116388752339,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0485e79d-e8b7-4707-ac59-fb0e28bbe279 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 20:45:16 ha-908878 crio[681]: time="2024-05-28 20:45:16.389324235Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3ae28b93-0736-45f1-ae3c-2c89a825fad9 name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:45:16 ha-908878 crio[681]: time="2024-05-28 20:45:16.389407272Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3ae28b93-0736-45f1-ae3c-2c89a825fad9 name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:45:16 ha-908878 crio[681]: time="2024-05-28 20:45:16.389641485Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:92c83dd481e567399f2a52a7fbc48eccc99c0cb418fa571fd10df44f361a9f39,PodSandboxId:dfbac4c22bc27b76f3af221ef78b5eb13bffba587238a75f6978ec32db0d2669,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716928912917476588,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ljbzs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a49d7b7-d8ae-44a8-8393-51781cf73591,},Annotations:map[string]string{io.kubernetes.container.hash: 1fd948c0,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c38e07fa546e3574439db0d8c463f8598be9b5ee1b022328a10e812abe191a6,PodSandboxId:fb8a83ba500b4c9c50bcef1be750bfc7e5a7db94160b054581efcafa03519147,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716928766590690596,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mvx67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b51beb7-0397-4008-b878-97edd41c6b94,},Annotations:map[string]string{io.kubernetes.container.hash: 10935396,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b6fe231fc7dbce25d01bf248b4e363bd410d67b785874c03d67b9895b05cd8d,PodSandboxId:0e94953284a5e4d09d285560204b96d126960c1c22367047d92a0697893879af,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716928766576204824,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: d79872e2-b267-446a-99dc-5bf9f398d31c,},Annotations:map[string]string{io.kubernetes.container.hash: 1d6aac92,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2470320e3bec5ccf0d58f255e4bc440917dfb3fae12b2a48365113a51d7dd6e9,PodSandboxId:5333c6894c4460cb033cd36d59ef316b9716e099ae9997c69bc25e869c87b03d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716928766572067060,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5fmns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a3bda1-29
ba-4982-baf5-0adc97b4eb45,},Annotations:map[string]string{io.kubernetes.container.hash: dc173f12,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ea51bf984916bead29c664359f65ea790360f73bf15486bdf96c758189bd69,PodSandboxId:1f695b783edb95cab72476e5f23428dad45f722dd44cbb0bff30bab6aa207223,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CO
NTAINER_RUNNING,CreatedAt:1716928765126540501,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4mzh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8069a7ea-0ab1-4064-b982-867dbdfd97aa,},Annotations:map[string]string{io.kubernetes.container.hash: 730aff3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97ba5f2725852aa19d200b1005c8bc9678a21a2b0a0b768cbf10f1dde692ecbe,PodSandboxId:2a5f076d2569c649f3e0bcb898d11d4c19a26e70dc189cb8501def24b83cdcec,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:171692876
1367490977,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ng8mq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0b1264-09c7-44b2-ba8c-e145e825fdbe,},Annotations:map[string]string{io.kubernetes.container.hash: 3bac967e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20cf414ed60510910dc01761f0afc869180c224d4e26872a468b10e603e8e786,PodSandboxId:9d1408565bd5163dd277d755c852f8d09b92ff4f0ac886493b78b17bc70e95f6,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17169287444
23802201,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10c65935005aeeb3bc67f128e502ec57,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05d5882852e6eb482b9e3ba850d60958305fa84234fd741f3345967a57b1c1f9,PodSandboxId:54beb07b658e504e1a48a48b45286f12adc245e7ae631b29d6fdd4debbc5e82a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716928741087996451,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f1c036da804e98d08f1608248bc0a8f,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:650c6f374c3b3e6697c77d68d23551a1335fae362d137a464df72e2bc23e4e14,PodSandboxId:232d528c76896db38a96f5da2a0ed0761f4f63d26ec15dd06b5b221600ccabff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716928740991369604,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35af7b91312437941e451d7e638db460,},Annotations:map[string]string{io.kubernetes.container.hash: cd1ed920,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aece72d9b21aa208ac4ce8512a57af63132d5882a8675650a980fa6b531af247,PodSandboxId:815ef28c8c10574c11bd2dce9a1acf1d7bfbf4859f7c59b844307688bca34a43,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716928741054454839,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c574865fe7260be39c1b7f67609414,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f926e075722f1f9cbd1b9f6559baa2071c1f539f729ddcb3572a87f64a8934e9,PodSandboxId:ce2508233e4b37815baef24981bbc12636f48bcc8015076d16dce0f2de38f726,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716928740948619502,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 386e57d3e5870130913571793cc3ee94,},Annotations:map[string]string{io.kubernetes.container.hash: f40b1b90,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3ae28b93-0736-45f1-ae3c-2c89a825fad9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	92c83dd481e56       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   dfbac4c22bc27       busybox-fc5497c4f-ljbzs
	7c38e07fa546e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   fb8a83ba500b4       coredns-7db6d8ff4d-mvx67
	0b6fe231fc7db       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   0e94953284a5e       storage-provisioner
	2470320e3bec5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   5333c6894c446       coredns-7db6d8ff4d-5fmns
	a7ea51bf98491       docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266    5 minutes ago       Running             kindnet-cni               0                   1f695b783edb9       kindnet-x4mzh
	97ba5f2725852       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      5 minutes ago       Running             kube-proxy                0                   2a5f076d2569c       kube-proxy-ng8mq
	20cf414ed6051       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   9d1408565bd51       kube-vip-ha-908878
	05d5882852e6e       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      6 minutes ago       Running             kube-scheduler            0                   54beb07b658e5       kube-scheduler-ha-908878
	aece72d9b21aa       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      6 minutes ago       Running             kube-controller-manager   0                   815ef28c8c105       kube-controller-manager-ha-908878
	650c6f374c3b3       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      6 minutes ago       Running             etcd                      0                   232d528c76896       etcd-ha-908878
	f926e075722f1       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      6 minutes ago       Running             kube-apiserver            0                   ce2508233e4b3       kube-apiserver-ha-908878
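The container status table above shows every CRI-O container on the primary control-plane node (ha-908878) in the Running state with zero restarts. As an illustrative sketch only (these commands are not part of the captured output; the profile name is taken from this report), roughly the same listing can be reproduced from inside the minikube VM with crictl:

    # list all CRI-O managed containers on the primary node
    minikube -p ha-908878 ssh -- sudo crictl ps -a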
	
	
	==> coredns [2470320e3bec5ccf0d58f255e4bc440917dfb3fae12b2a48365113a51d7dd6e9] <==
	[INFO] 10.244.1.2:56205 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000131203s
	[INFO] 10.244.1.2:38624 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000093706s
	[INFO] 10.244.2.2:58947 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117263s
	[INFO] 10.244.2.2:42241 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004799735s
	[INFO] 10.244.2.2:34187 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000308919s
	[INFO] 10.244.2.2:41613 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002489251s
	[INFO] 10.244.2.2:55408 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000147549s
	[INFO] 10.244.0.4:57170 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000374705s
	[INFO] 10.244.0.4:58966 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000155963s
	[INFO] 10.244.0.4:35423 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000111865s
	[INFO] 10.244.1.2:37835 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000079714s
	[INFO] 10.244.1.2:45922 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000128914s
	[INFO] 10.244.2.2:49120 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102234s
	[INFO] 10.244.2.2:59817 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113316s
	[INFO] 10.244.1.2:33990 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104132s
	[INFO] 10.244.1.2:57343 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000065665s
	[INFO] 10.244.1.2:37008 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000144249s
	[INFO] 10.244.2.2:57641 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000201576s
	[INFO] 10.244.0.4:55430 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00016202s
	[INFO] 10.244.0.4:58197 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000154574s
	[INFO] 10.244.0.4:43002 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000159971s
	[INFO] 10.244.1.2:33008 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159565s
	[INFO] 10.244.1.2:55799 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000106231s
	[INFO] 10.244.1.2:34935 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000119985s
	[INFO] 10.244.1.2:55524 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000077247s
	
	
	==> coredns [7c38e07fa546e3574439db0d8c463f8598be9b5ee1b022328a10e812abe191a6] <==
	[INFO] 10.244.2.2:34220 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000181008s
	[INFO] 10.244.2.2:45561 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000220146s
	[INFO] 10.244.2.2:58602 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000170027s
	[INFO] 10.244.0.4:43029 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001811296s
	[INFO] 10.244.0.4:49612 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098819s
	[INFO] 10.244.0.4:33728 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000042492s
	[INFO] 10.244.0.4:34284 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001158314s
	[INFO] 10.244.0.4:52540 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000045508s
	[INFO] 10.244.1.2:36534 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139592s
	[INFO] 10.244.1.2:55059 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00181265s
	[INFO] 10.244.1.2:57133 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001147785s
	[INFO] 10.244.1.2:59156 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00008284s
	[INFO] 10.244.1.2:56011 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000189969s
	[INFO] 10.244.1.2:57157 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076075s
	[INFO] 10.244.2.2:38176 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112538s
	[INFO] 10.244.2.2:54457 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000111343s
	[INFO] 10.244.0.4:46728 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104994s
	[INFO] 10.244.0.4:49514 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000077463s
	[INFO] 10.244.0.4:40805 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103396s
	[INFO] 10.244.0.4:41445 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093035s
	[INFO] 10.244.1.2:48615 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169745s
	[INFO] 10.244.2.2:39740 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00022698s
	[INFO] 10.244.2.2:42139 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000182159s
	[INFO] 10.244.2.2:54665 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00035602s
	[INFO] 10.244.0.4:33063 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000104255s
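Both coredns replicas answer A/AAAA/PTR queries for cluster-internal names (kubernetes.default.svc.cluster.local, host.minikube.internal) with NOERROR, so in-cluster DNS was healthy at capture time. As a hedged illustration (not taken from this run), resolution can be spot-checked from a throwaway pod:

    # one-off DNS probe; busybox:1.28 ships a usable nslookup
    kubectl run dns-probe --image=busybox:1.28 --rm -it --restart=Never -- nslookup kubernetes.default.svc.cluster.local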
	
	
	==> describe nodes <==
	Name:               ha-908878
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-908878
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82
	                    minikube.k8s.io/name=ha-908878
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_28T20_39_08_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 20:39:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-908878
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 20:45:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 May 2024 20:42:10 +0000   Tue, 28 May 2024 20:39:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 May 2024 20:42:10 +0000   Tue, 28 May 2024 20:39:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 May 2024 20:42:10 +0000   Tue, 28 May 2024 20:39:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 May 2024 20:42:10 +0000   Tue, 28 May 2024 20:39:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.100
	  Hostname:    ha-908878
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a470f4bebd094a03b2a08db3a205d097
	  System UUID:                a470f4be-bd09-4a03-b2a0-8db3a205d097
	  Boot ID:                    e5dc2485-8c44-4c4f-899c-7eb02750525b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-ljbzs              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 coredns-7db6d8ff4d-5fmns             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     5m56s
	  kube-system                 coredns-7db6d8ff4d-mvx67             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     5m56s
	  kube-system                 etcd-ha-908878                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m9s
	  kube-system                 kindnet-x4mzh                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m57s
	  kube-system                 kube-apiserver-ha-908878             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 kube-controller-manager-ha-908878    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 kube-proxy-ng8mq                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 kube-scheduler-ha-908878             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 kube-vip-ha-908878                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m11s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m55s  kube-proxy       
	  Normal  Starting                 6m9s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m9s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m9s   kubelet          Node ha-908878 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m9s   kubelet          Node ha-908878 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m9s   kubelet          Node ha-908878 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m57s  node-controller  Node ha-908878 event: Registered Node ha-908878 in Controller
	  Normal  NodeReady                5m51s  kubelet          Node ha-908878 status is now: NodeReady
	  Normal  RegisteredNode           4m49s  node-controller  Node ha-908878 event: Registered Node ha-908878 in Controller
	  Normal  RegisteredNode           3m34s  node-controller  Node ha-908878 event: Registered Node ha-908878 in Controller
	
	
	Name:               ha-908878-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-908878-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82
	                    minikube.k8s.io/name=ha-908878
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_28T20_40_11_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 20:40:09 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-908878-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 20:42:53 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 28 May 2024 20:42:11 +0000   Tue, 28 May 2024 20:43:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 28 May 2024 20:42:11 +0000   Tue, 28 May 2024 20:43:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 28 May 2024 20:42:11 +0000   Tue, 28 May 2024 20:43:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 28 May 2024 20:42:11 +0000   Tue, 28 May 2024 20:43:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.239
	  Hostname:    ha-908878-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8f91cea3af174de9a05db650e4662bbb
	  System UUID:                8f91cea3-af17-4de9-a05d-b650e4662bbb
	  Boot ID:                    b2eef028-5a7a-487d-9126-300ce051c010
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-rfl74                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 etcd-ha-908878-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m5s
	  kube-system                 kindnet-6prxw                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m7s
	  kube-system                 kube-apiserver-ha-908878-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-controller-manager-ha-908878-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-proxy-pg89k                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 kube-scheduler-ha-908878-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-vip-ha-908878-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m3s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  5m7s (x8 over 5m7s)  kubelet          Node ha-908878-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m7s (x8 over 5m7s)  kubelet          Node ha-908878-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m7s (x7 over 5m7s)  kubelet          Node ha-908878-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m2s                 node-controller  Node ha-908878-m02 event: Registered Node ha-908878-m02 in Controller
	  Normal  RegisteredNode           4m49s                node-controller  Node ha-908878-m02 event: Registered Node ha-908878-m02 in Controller
	  Normal  RegisteredNode           3m34s                node-controller  Node ha-908878-m02 event: Registered Node ha-908878-m02 in Controller
	  Normal  NodeNotReady             102s                 node-controller  Node ha-908878-m02 status is now: NodeNotReady
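The m02 description is the relevant one for this failure group: all four conditions are Unknown with "Kubelet stopped posting node status" since 20:43:34, the node carries node.kubernetes.io/unreachable taints, and the controller marked it NodeNotReady 102s before capture, which is consistent with a deliberately stopped secondary control-plane node. For illustration only (commands assumed, not from the captured run), the same state can be confirmed with:

    # show node readiness and the unreachable taints on the stopped member
    kubectl get nodes -o wide
    kubectl describe node ha-908878-m02 | grep -A 6 Taints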
	
	
	Name:               ha-908878-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-908878-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82
	                    minikube.k8s.io/name=ha-908878
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_28T20_41_28_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 20:41:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-908878-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 20:45:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 May 2024 20:41:55 +0000   Tue, 28 May 2024 20:41:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 May 2024 20:41:55 +0000   Tue, 28 May 2024 20:41:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 May 2024 20:41:55 +0000   Tue, 28 May 2024 20:41:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 May 2024 20:41:55 +0000   Tue, 28 May 2024 20:41:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.73
	  Hostname:    ha-908878-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2e3e9f9367694cccab6cb31074c7abc1
	  System UUID:                2e3e9f93-6769-4ccc-ab6c-b31074c7abc1
	  Boot ID:                    db2680cb-6e23-43c2-b2b5-a7f2a2d62f5c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-ldbfj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 etcd-ha-908878-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m50s
	  kube-system                 kindnet-fx2nj                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m52s
	  kube-system                 kube-apiserver-ha-908878-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 kube-controller-manager-ha-908878-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 kube-proxy-4vjp6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 kube-scheduler-ha-908878-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 kube-vip-ha-908878-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m47s                  kube-proxy       
	  Normal  RegisteredNode           3m52s                  node-controller  Node ha-908878-m03 event: Registered Node ha-908878-m03 in Controller
	  Normal  NodeHasSufficientMemory  3m52s (x8 over 3m52s)  kubelet          Node ha-908878-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m52s (x8 over 3m52s)  kubelet          Node ha-908878-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m52s (x7 over 3m52s)  kubelet          Node ha-908878-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m49s                  node-controller  Node ha-908878-m03 event: Registered Node ha-908878-m03 in Controller
	  Normal  RegisteredNode           3m34s                  node-controller  Node ha-908878-m03 event: Registered Node ha-908878-m03 in Controller
	
	
	Name:               ha-908878-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-908878-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82
	                    minikube.k8s.io/name=ha-908878
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_28T20_42_26_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 20:42:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-908878-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 20:45:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 May 2024 20:42:56 +0000   Tue, 28 May 2024 20:42:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 May 2024 20:42:56 +0000   Tue, 28 May 2024 20:42:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 May 2024 20:42:56 +0000   Tue, 28 May 2024 20:42:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 May 2024 20:42:56 +0000   Tue, 28 May 2024 20:42:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.38
	  Hostname:    ha-908878-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f3c86941732a4e078803ce72d6cca1eb
	  System UUID:                f3c86941-732a-4e07-8803-ce72d6cca1eb
	  Boot ID:                    3305d0dc-4089-4a56-838a-9e99a8e74f80
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-68kxq       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m51s
	  kube-system                 kube-proxy-bnh2w    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)   100m (5%)
	  memory             50Mi (2%)   50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m45s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m51s (x2 over 2m51s)  kubelet          Node ha-908878-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m51s (x2 over 2m51s)  kubelet          Node ha-908878-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m51s (x2 over 2m51s)  kubelet          Node ha-908878-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m49s                  node-controller  Node ha-908878-m04 event: Registered Node ha-908878-m04 in Controller
	  Normal  RegisteredNode           2m49s                  node-controller  Node ha-908878-m04 event: Registered Node ha-908878-m04 in Controller
	  Normal  RegisteredNode           2m47s                  node-controller  Node ha-908878-m04 event: Registered Node ha-908878-m04 in Controller
	  Normal  NodeReady                2m41s                  kubelet          Node ha-908878-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[May28 20:38] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050785] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040005] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.504289] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.190103] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.581513] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.578430] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.054216] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.052934] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.180850] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.119729] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.261744] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.070195] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +5.007183] systemd-fstab-generator[954]: Ignoring "noauto" option for root device
	[  +0.062643] kauditd_printk_skb: 158 callbacks suppressed
	[May28 20:39] systemd-fstab-generator[1373]: Ignoring "noauto" option for root device
	[  +0.085155] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.532403] kauditd_printk_skb: 18 callbacks suppressed
	[ +13.860818] kauditd_printk_skb: 38 callbacks suppressed
	[May28 20:40] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [650c6f374c3b3e6697c77d68d23551a1335fae362d137a464df72e2bc23e4e14] <==
	{"level":"warn","ts":"2024-05-28T20:45:16.602589Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T20:45:16.665197Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.239:2380/version","remote-member-id":"a38e88b5839dc078","error":"Get \"https://192.168.39.239:2380/version\": dial tcp 192.168.39.239:2380: i/o timeout"}
	{"level":"warn","ts":"2024-05-28T20:45:16.665249Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"a38e88b5839dc078","error":"Get \"https://192.168.39.239:2380/version\": dial tcp 192.168.39.239:2380: i/o timeout"}
	{"level":"warn","ts":"2024-05-28T20:45:16.669049Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T20:45:16.677182Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T20:45:16.682263Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T20:45:16.694452Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T20:45:16.704932Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T20:45:16.706006Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T20:45:16.712791Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T20:45:16.716468Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T20:45:16.719774Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T20:45:16.733833Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T20:45:16.739511Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T20:45:16.742985Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T20:45:16.745862Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T20:45:16.755293Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T20:45:16.766718Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T20:45:16.775045Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T20:45:16.779089Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T20:45:16.78298Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T20:45:16.790408Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T20:45:16.799201Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T20:45:16.803051Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T20:45:16.805443Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 20:45:16 up 6 min,  0 users,  load average: 0.22, 0.27, 0.13
	Linux ha-908878 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [a7ea51bf984916bead29c664359f65ea790360f73bf15486bdf96c758189bd69] <==
	I0528 20:44:46.190612       1 main.go:250] Node ha-908878-m04 has CIDR [10.244.3.0/24] 
	I0528 20:44:56.196826       1 main.go:223] Handling node with IPs: map[192.168.39.100:{}]
	I0528 20:44:56.196960       1 main.go:227] handling current node
	I0528 20:44:56.197031       1 main.go:223] Handling node with IPs: map[192.168.39.239:{}]
	I0528 20:44:56.197055       1 main.go:250] Node ha-908878-m02 has CIDR [10.244.1.0/24] 
	I0528 20:44:56.197178       1 main.go:223] Handling node with IPs: map[192.168.39.73:{}]
	I0528 20:44:56.197199       1 main.go:250] Node ha-908878-m03 has CIDR [10.244.2.0/24] 
	I0528 20:44:56.197261       1 main.go:223] Handling node with IPs: map[192.168.39.38:{}]
	I0528 20:44:56.197278       1 main.go:250] Node ha-908878-m04 has CIDR [10.244.3.0/24] 
	I0528 20:45:06.208238       1 main.go:223] Handling node with IPs: map[192.168.39.100:{}]
	I0528 20:45:06.208305       1 main.go:227] handling current node
	I0528 20:45:06.208329       1 main.go:223] Handling node with IPs: map[192.168.39.239:{}]
	I0528 20:45:06.208345       1 main.go:250] Node ha-908878-m02 has CIDR [10.244.1.0/24] 
	I0528 20:45:06.208662       1 main.go:223] Handling node with IPs: map[192.168.39.73:{}]
	I0528 20:45:06.208699       1 main.go:250] Node ha-908878-m03 has CIDR [10.244.2.0/24] 
	I0528 20:45:06.208767       1 main.go:223] Handling node with IPs: map[192.168.39.38:{}]
	I0528 20:45:06.208785       1 main.go:250] Node ha-908878-m04 has CIDR [10.244.3.0/24] 
	I0528 20:45:16.215146       1 main.go:223] Handling node with IPs: map[192.168.39.100:{}]
	I0528 20:45:16.215165       1 main.go:227] handling current node
	I0528 20:45:16.215176       1 main.go:223] Handling node with IPs: map[192.168.39.239:{}]
	I0528 20:45:16.215180       1 main.go:250] Node ha-908878-m02 has CIDR [10.244.1.0/24] 
	I0528 20:45:16.215273       1 main.go:223] Handling node with IPs: map[192.168.39.73:{}]
	I0528 20:45:16.215277       1 main.go:250] Node ha-908878-m03 has CIDR [10.244.2.0/24] 
	I0528 20:45:16.215321       1 main.go:223] Handling node with IPs: map[192.168.39.38:{}]
	I0528 20:45:16.215325       1 main.go:250] Node ha-908878-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [f926e075722f1f9cbd1b9f6559baa2071c1f539f729ddcb3572a87f64a8934e9] <==
	I0528 20:39:07.236082       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0528 20:39:07.266129       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0528 20:39:07.285274       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0528 20:39:19.318316       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0528 20:39:20.069050       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0528 20:40:10.266455       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0528 20:40:10.266701       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0528 20:40:10.266746       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 8.839µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0528 20:40:10.268007       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0528 20:40:10.268121       1 timeout.go:142] post-timeout activity - time-elapsed: 1.756296ms, POST "/api/v1/namespaces/kube-system/events" result: <nil>
	E0528 20:41:54.413583       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55066: use of closed network connection
	E0528 20:41:54.619077       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55080: use of closed network connection
	E0528 20:41:54.803730       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55088: use of closed network connection
	E0528 20:41:55.011752       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55114: use of closed network connection
	E0528 20:41:55.211373       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55138: use of closed network connection
	E0528 20:41:55.402622       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55162: use of closed network connection
	E0528 20:41:55.576393       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55190: use of closed network connection
	E0528 20:41:55.790287       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55218: use of closed network connection
	E0528 20:41:55.969182       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55238: use of closed network connection
	E0528 20:41:56.277128       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55268: use of closed network connection
	E0528 20:41:56.445213       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55290: use of closed network connection
	E0528 20:41:56.630184       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55314: use of closed network connection
	E0528 20:41:56.817823       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55326: use of closed network connection
	E0528 20:41:56.990599       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55348: use of closed network connection
	E0528 20:41:57.171180       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55372: use of closed network connection
	
	
	==> kube-controller-manager [aece72d9b21aa208ac4ce8512a57af63132d5882a8675650a980fa6b531af247] <==
	I0528 20:41:24.352789       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-908878-m03"
	I0528 20:41:49.620415       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="78.896976ms"
	I0528 20:41:49.662578       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.071865ms"
	I0528 20:41:49.665186       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="2.545967ms"
	I0528 20:41:49.665448       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="111.789µs"
	I0528 20:41:49.788653       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="119.673974ms"
	I0528 20:41:49.959149       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="169.158521ms"
	I0528 20:41:50.024107       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="64.881844ms"
	I0528 20:41:50.068496       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.340615ms"
	I0528 20:41:50.068724       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.248µs"
	I0528 20:41:53.410659       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.275506ms"
	I0528 20:41:53.410947       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="100.061µs"
	I0528 20:41:53.838999       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.775079ms"
	I0528 20:41:53.839177       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.576µs"
	I0528 20:41:53.960484       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.291459ms"
	I0528 20:41:53.960696       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="89.482µs"
	E0528 20:42:25.215787       1 certificate_controller.go:146] Sync csr-wnzwn failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-wnzwn": the object has been modified; please apply your changes to the latest version and try again
	E0528 20:42:25.235546       1 certificate_controller.go:146] Sync csr-wnzwn failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-wnzwn": the object has been modified; please apply your changes to the latest version and try again
	I0528 20:42:25.529760       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-908878-m04\" does not exist"
	I0528 20:42:25.556939       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-908878-m04" podCIDRs=["10.244.3.0/24"]
	I0528 20:42:29.382657       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-908878-m04"
	I0528 20:42:35.936469       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-908878-m04"
	I0528 20:43:34.405211       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-908878-m04"
	I0528 20:43:34.458744       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.320257ms"
	I0528 20:43:34.459114       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.898µs"
	
	
	==> kube-proxy [97ba5f2725852aa19d200b1005c8bc9678a21a2b0a0b768cbf10f1dde692ecbe] <==
	I0528 20:39:21.545470       1 server_linux.go:69] "Using iptables proxy"
	I0528 20:39:21.569641       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.100"]
	I0528 20:39:21.631409       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0528 20:39:21.631495       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0528 20:39:21.631512       1 server_linux.go:165] "Using iptables Proxier"
	I0528 20:39:21.634617       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0528 20:39:21.635082       1 server.go:872] "Version info" version="v1.30.1"
	I0528 20:39:21.635116       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 20:39:21.636675       1 config.go:192] "Starting service config controller"
	I0528 20:39:21.636707       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0528 20:39:21.636737       1 config.go:101] "Starting endpoint slice config controller"
	I0528 20:39:21.636758       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0528 20:39:21.637418       1 config.go:319] "Starting node config controller"
	I0528 20:39:21.637446       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0528 20:39:21.737927       1 shared_informer.go:320] Caches are synced for node config
	I0528 20:39:21.737972       1 shared_informer.go:320] Caches are synced for service config
	I0528 20:39:21.738008       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [05d5882852e6eb482b9e3ba850d60958305fa84234fd741f3345967a57b1c1f9] <==
	W0528 20:39:05.084846       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0528 20:39:05.084955       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0528 20:39:05.100133       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0528 20:39:05.100220       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0528 20:39:05.182255       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0528 20:39:05.182542       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0528 20:39:05.219676       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0528 20:39:05.219802       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0528 20:39:05.336519       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0528 20:39:05.336613       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0528 20:39:05.349682       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0528 20:39:05.350132       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0528 20:39:05.355219       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0528 20:39:05.355300       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0528 20:39:05.750699       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0528 20:41:49.620623       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-ldbfj\": pod busybox-fc5497c4f-ldbfj is already assigned to node \"ha-908878-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-ldbfj" node="ha-908878-m03"
	E0528 20:41:49.621210       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 28610a08-d992-429e-8480-d957b325ccbd(default/busybox-fc5497c4f-ldbfj) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-ldbfj"
	E0528 20:41:49.621549       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-ldbfj\": pod busybox-fc5497c4f-ldbfj is already assigned to node \"ha-908878-m03\"" pod="default/busybox-fc5497c4f-ldbfj"
	I0528 20:41:49.621644       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-ldbfj" node="ha-908878-m03"
	E0528 20:41:49.620767       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-ljbzs\": pod busybox-fc5497c4f-ljbzs is already assigned to node \"ha-908878\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-ljbzs" node="ha-908878"
	E0528 20:41:49.628536       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 3a49d7b7-d8ae-44a8-8393-51781cf73591(default/busybox-fc5497c4f-ljbzs) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-ljbzs"
	E0528 20:41:49.628562       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-ljbzs\": pod busybox-fc5497c4f-ljbzs is already assigned to node \"ha-908878\"" pod="default/busybox-fc5497c4f-ljbzs"
	I0528 20:41:49.628589       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-ljbzs" node="ha-908878"
	E0528 20:42:25.645571       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-68kxq\": pod kindnet-68kxq is already assigned to node \"ha-908878-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-68kxq" node="ha-908878-m04"
	E0528 20:42:25.646180       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-68kxq\": pod kindnet-68kxq is already assigned to node \"ha-908878-m04\"" pod="kube-system/kindnet-68kxq"
	
	
	==> kubelet <==
	May 28 20:41:07 ha-908878 kubelet[1380]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 20:41:07 ha-908878 kubelet[1380]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 20:41:07 ha-908878 kubelet[1380]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 28 20:41:49 ha-908878 kubelet[1380]: I0528 20:41:49.623755    1380 topology_manager.go:215] "Topology Admit Handler" podUID="3a49d7b7-d8ae-44a8-8393-51781cf73591" podNamespace="default" podName="busybox-fc5497c4f-ljbzs"
	May 28 20:41:49 ha-908878 kubelet[1380]: I0528 20:41:49.732126    1380 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t62p\" (UniqueName: \"kubernetes.io/projected/3a49d7b7-d8ae-44a8-8393-51781cf73591-kube-api-access-5t62p\") pod \"busybox-fc5497c4f-ljbzs\" (UID: \"3a49d7b7-d8ae-44a8-8393-51781cf73591\") " pod="default/busybox-fc5497c4f-ljbzs"
	May 28 20:42:07 ha-908878 kubelet[1380]: E0528 20:42:07.192347    1380 iptables.go:577] "Could not set up iptables canary" err=<
	May 28 20:42:07 ha-908878 kubelet[1380]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 20:42:07 ha-908878 kubelet[1380]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 20:42:07 ha-908878 kubelet[1380]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 20:42:07 ha-908878 kubelet[1380]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 28 20:43:07 ha-908878 kubelet[1380]: E0528 20:43:07.191954    1380 iptables.go:577] "Could not set up iptables canary" err=<
	May 28 20:43:07 ha-908878 kubelet[1380]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 20:43:07 ha-908878 kubelet[1380]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 20:43:07 ha-908878 kubelet[1380]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 20:43:07 ha-908878 kubelet[1380]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 28 20:44:07 ha-908878 kubelet[1380]: E0528 20:44:07.196222    1380 iptables.go:577] "Could not set up iptables canary" err=<
	May 28 20:44:07 ha-908878 kubelet[1380]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 20:44:07 ha-908878 kubelet[1380]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 20:44:07 ha-908878 kubelet[1380]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 20:44:07 ha-908878 kubelet[1380]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 28 20:45:07 ha-908878 kubelet[1380]: E0528 20:45:07.189196    1380 iptables.go:577] "Could not set up iptables canary" err=<
	May 28 20:45:07 ha-908878 kubelet[1380]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 20:45:07 ha-908878 kubelet[1380]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 20:45:07 ha-908878 kubelet[1380]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 20:45:07 ha-908878 kubelet[1380]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-908878 -n ha-908878
helpers_test.go:261: (dbg) Run:  kubectl --context ha-908878 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.80s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (60.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-908878 status -v=7 --alsologtostderr: exit status 3 (3.20419839s)

                                                
                                                
-- stdout --
	ha-908878
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-908878-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-908878-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-908878-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0528 20:45:21.403361   27357 out.go:291] Setting OutFile to fd 1 ...
	I0528 20:45:21.403584   27357 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:45:21.403592   27357 out.go:304] Setting ErrFile to fd 2...
	I0528 20:45:21.403596   27357 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:45:21.403794   27357 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
	I0528 20:45:21.403941   27357 out.go:298] Setting JSON to false
	I0528 20:45:21.403962   27357 mustload.go:65] Loading cluster: ha-908878
	I0528 20:45:21.404077   27357 notify.go:220] Checking for updates...
	I0528 20:45:21.404295   27357 config.go:182] Loaded profile config "ha-908878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 20:45:21.404308   27357 status.go:255] checking status of ha-908878 ...
	I0528 20:45:21.404647   27357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:21.404712   27357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:21.420006   27357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42781
	I0528 20:45:21.420433   27357 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:21.421036   27357 main.go:141] libmachine: Using API Version  1
	I0528 20:45:21.421055   27357 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:21.421477   27357 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:21.421683   27357 main.go:141] libmachine: (ha-908878) Calling .GetState
	I0528 20:45:21.423195   27357 status.go:330] ha-908878 host status = "Running" (err=<nil>)
	I0528 20:45:21.423212   27357 host.go:66] Checking if "ha-908878" exists ...
	I0528 20:45:21.423513   27357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:21.423545   27357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:21.438509   27357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40585
	I0528 20:45:21.438828   27357 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:21.439248   27357 main.go:141] libmachine: Using API Version  1
	I0528 20:45:21.439271   27357 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:21.439545   27357 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:21.439747   27357 main.go:141] libmachine: (ha-908878) Calling .GetIP
	I0528 20:45:21.442170   27357 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:45:21.442583   27357 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:45:21.442614   27357 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:45:21.442720   27357 host.go:66] Checking if "ha-908878" exists ...
	I0528 20:45:21.443002   27357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:21.443064   27357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:21.458432   27357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45589
	I0528 20:45:21.458894   27357 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:21.459326   27357 main.go:141] libmachine: Using API Version  1
	I0528 20:45:21.459345   27357 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:21.459642   27357 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:21.459827   27357 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:45:21.460036   27357 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 20:45:21.460055   27357 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:45:21.462747   27357 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:45:21.463118   27357 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:45:21.463143   27357 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:45:21.463322   27357 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:45:21.463490   27357 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:45:21.463624   27357 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:45:21.463741   27357 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/id_rsa Username:docker}
	I0528 20:45:21.545573   27357 ssh_runner.go:195] Run: systemctl --version
	I0528 20:45:21.551506   27357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 20:45:21.566673   27357 kubeconfig.go:125] found "ha-908878" server: "https://192.168.39.254:8443"
	I0528 20:45:21.566699   27357 api_server.go:166] Checking apiserver status ...
	I0528 20:45:21.566738   27357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 20:45:21.581741   27357 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1158/cgroup
	W0528 20:45:21.598685   27357 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1158/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0528 20:45:21.598731   27357 ssh_runner.go:195] Run: ls
	I0528 20:45:21.603395   27357 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0528 20:45:21.609229   27357 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0528 20:45:21.609259   27357 status.go:422] ha-908878 apiserver status = Running (err=<nil>)
	I0528 20:45:21.609271   27357 status.go:257] ha-908878 status: &{Name:ha-908878 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0528 20:45:21.609295   27357 status.go:255] checking status of ha-908878-m02 ...
	I0528 20:45:21.609679   27357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:21.609723   27357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:21.625006   27357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33709
	I0528 20:45:21.625351   27357 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:21.625874   27357 main.go:141] libmachine: Using API Version  1
	I0528 20:45:21.625902   27357 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:21.626256   27357 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:21.626439   27357 main.go:141] libmachine: (ha-908878-m02) Calling .GetState
	I0528 20:45:21.628087   27357 status.go:330] ha-908878-m02 host status = "Running" (err=<nil>)
	I0528 20:45:21.628104   27357 host.go:66] Checking if "ha-908878-m02" exists ...
	I0528 20:45:21.628369   27357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:21.628423   27357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:21.645733   27357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38367
	I0528 20:45:21.646128   27357 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:21.646516   27357 main.go:141] libmachine: Using API Version  1
	I0528 20:45:21.646534   27357 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:21.646832   27357 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:21.647005   27357 main.go:141] libmachine: (ha-908878-m02) Calling .GetIP
	I0528 20:45:21.649867   27357 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:45:21.650321   27357 main.go:141] libmachine: (ha-908878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:bd:28", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:39:35 +0000 UTC Type:0 Mac:52:54:00:b4:bd:28 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:ha-908878-m02 Clientid:01:52:54:00:b4:bd:28}
	I0528 20:45:21.650349   27357 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:45:21.650445   27357 host.go:66] Checking if "ha-908878-m02" exists ...
	I0528 20:45:21.650744   27357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:21.650784   27357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:21.665960   27357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39449
	I0528 20:45:21.666357   27357 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:21.666863   27357 main.go:141] libmachine: Using API Version  1
	I0528 20:45:21.666886   27357 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:21.667256   27357 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:21.667457   27357 main.go:141] libmachine: (ha-908878-m02) Calling .DriverName
	I0528 20:45:21.667648   27357 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 20:45:21.667665   27357 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHHostname
	I0528 20:45:21.670289   27357 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:45:21.670670   27357 main.go:141] libmachine: (ha-908878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:bd:28", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:39:35 +0000 UTC Type:0 Mac:52:54:00:b4:bd:28 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:ha-908878-m02 Clientid:01:52:54:00:b4:bd:28}
	I0528 20:45:21.670698   27357 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:45:21.670788   27357 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHPort
	I0528 20:45:21.670976   27357 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHKeyPath
	I0528 20:45:21.671154   27357 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHUsername
	I0528 20:45:21.671297   27357 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m02/id_rsa Username:docker}
	W0528 20:45:24.218076   27357 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.239:22: connect: no route to host
	W0528 20:45:24.218181   27357 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.239:22: connect: no route to host
	E0528 20:45:24.218212   27357 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.239:22: connect: no route to host
	I0528 20:45:24.218223   27357 status.go:257] ha-908878-m02 status: &{Name:ha-908878-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0528 20:45:24.218245   27357 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.239:22: connect: no route to host
	I0528 20:45:24.218260   27357 status.go:255] checking status of ha-908878-m03 ...
	I0528 20:45:24.218640   27357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:24.218680   27357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:24.235289   27357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46059
	I0528 20:45:24.235640   27357 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:24.236115   27357 main.go:141] libmachine: Using API Version  1
	I0528 20:45:24.236136   27357 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:24.236429   27357 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:24.236625   27357 main.go:141] libmachine: (ha-908878-m03) Calling .GetState
	I0528 20:45:24.238286   27357 status.go:330] ha-908878-m03 host status = "Running" (err=<nil>)
	I0528 20:45:24.238302   27357 host.go:66] Checking if "ha-908878-m03" exists ...
	I0528 20:45:24.238580   27357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:24.238611   27357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:24.255352   27357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33109
	I0528 20:45:24.255750   27357 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:24.256183   27357 main.go:141] libmachine: Using API Version  1
	I0528 20:45:24.256212   27357 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:24.256509   27357 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:24.256689   27357 main.go:141] libmachine: (ha-908878-m03) Calling .GetIP
	I0528 20:45:24.259507   27357 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:45:24.259941   27357 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:45:24.259966   27357 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:45:24.260119   27357 host.go:66] Checking if "ha-908878-m03" exists ...
	I0528 20:45:24.260472   27357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:24.260507   27357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:24.274300   27357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36541
	I0528 20:45:24.274745   27357 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:24.275205   27357 main.go:141] libmachine: Using API Version  1
	I0528 20:45:24.275228   27357 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:24.275503   27357 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:24.275712   27357 main.go:141] libmachine: (ha-908878-m03) Calling .DriverName
	I0528 20:45:24.275897   27357 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 20:45:24.275922   27357 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHHostname
	I0528 20:45:24.278745   27357 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:45:24.279168   27357 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:45:24.279188   27357 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:45:24.279368   27357 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHPort
	I0528 20:45:24.279536   27357 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHKeyPath
	I0528 20:45:24.279670   27357 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHUsername
	I0528 20:45:24.279783   27357 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m03/id_rsa Username:docker}
	I0528 20:45:24.365038   27357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 20:45:24.379928   27357 kubeconfig.go:125] found "ha-908878" server: "https://192.168.39.254:8443"
	I0528 20:45:24.379963   27357 api_server.go:166] Checking apiserver status ...
	I0528 20:45:24.380004   27357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 20:45:24.392733   27357 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1538/cgroup
	W0528 20:45:24.403008   27357 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1538/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0528 20:45:24.403059   27357 ssh_runner.go:195] Run: ls
	I0528 20:45:24.408270   27357 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0528 20:45:24.413280   27357 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0528 20:45:24.413298   27357 status.go:422] ha-908878-m03 apiserver status = Running (err=<nil>)
	I0528 20:45:24.413305   27357 status.go:257] ha-908878-m03 status: &{Name:ha-908878-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0528 20:45:24.413322   27357 status.go:255] checking status of ha-908878-m04 ...
	I0528 20:45:24.413616   27357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:24.413652   27357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:24.429100   27357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41127
	I0528 20:45:24.429482   27357 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:24.429915   27357 main.go:141] libmachine: Using API Version  1
	I0528 20:45:24.429932   27357 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:24.430222   27357 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:24.430441   27357 main.go:141] libmachine: (ha-908878-m04) Calling .GetState
	I0528 20:45:24.432044   27357 status.go:330] ha-908878-m04 host status = "Running" (err=<nil>)
	I0528 20:45:24.432060   27357 host.go:66] Checking if "ha-908878-m04" exists ...
	I0528 20:45:24.432329   27357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:24.432367   27357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:24.446487   27357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37257
	I0528 20:45:24.446913   27357 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:24.447284   27357 main.go:141] libmachine: Using API Version  1
	I0528 20:45:24.447301   27357 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:24.447612   27357 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:24.447743   27357 main.go:141] libmachine: (ha-908878-m04) Calling .GetIP
	I0528 20:45:24.450326   27357 main.go:141] libmachine: (ha-908878-m04) DBG | domain ha-908878-m04 has defined MAC address 52:54:00:dd:1f:24 in network mk-ha-908878
	I0528 20:45:24.450691   27357 main.go:141] libmachine: (ha-908878-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:1f:24", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:42:12 +0000 UTC Type:0 Mac:52:54:00:dd:1f:24 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-908878-m04 Clientid:01:52:54:00:dd:1f:24}
	I0528 20:45:24.450705   27357 main.go:141] libmachine: (ha-908878-m04) DBG | domain ha-908878-m04 has defined IP address 192.168.39.38 and MAC address 52:54:00:dd:1f:24 in network mk-ha-908878
	I0528 20:45:24.450837   27357 host.go:66] Checking if "ha-908878-m04" exists ...
	I0528 20:45:24.451099   27357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:24.451130   27357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:24.464846   27357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39429
	I0528 20:45:24.465187   27357 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:24.465632   27357 main.go:141] libmachine: Using API Version  1
	I0528 20:45:24.465662   27357 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:24.465990   27357 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:24.466180   27357 main.go:141] libmachine: (ha-908878-m04) Calling .DriverName
	I0528 20:45:24.466353   27357 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 20:45:24.466371   27357 main.go:141] libmachine: (ha-908878-m04) Calling .GetSSHHostname
	I0528 20:45:24.469025   27357 main.go:141] libmachine: (ha-908878-m04) DBG | domain ha-908878-m04 has defined MAC address 52:54:00:dd:1f:24 in network mk-ha-908878
	I0528 20:45:24.469407   27357 main.go:141] libmachine: (ha-908878-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:1f:24", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:42:12 +0000 UTC Type:0 Mac:52:54:00:dd:1f:24 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-908878-m04 Clientid:01:52:54:00:dd:1f:24}
	I0528 20:45:24.469445   27357 main.go:141] libmachine: (ha-908878-m04) DBG | domain ha-908878-m04 has defined IP address 192.168.39.38 and MAC address 52:54:00:dd:1f:24 in network mk-ha-908878
	I0528 20:45:24.469592   27357 main.go:141] libmachine: (ha-908878-m04) Calling .GetSSHPort
	I0528 20:45:24.469740   27357 main.go:141] libmachine: (ha-908878-m04) Calling .GetSSHKeyPath
	I0528 20:45:24.469879   27357 main.go:141] libmachine: (ha-908878-m04) Calling .GetSSHUsername
	I0528 20:45:24.469998   27357 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m04/id_rsa Username:docker}
	I0528 20:45:24.553031   27357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 20:45:24.566587   27357 status.go:257] ha-908878-m04 status: &{Name:ha-908878-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-908878 status -v=7 --alsologtostderr: exit status 3 (2.405733319s)

                                                
                                                
-- stdout --
	ha-908878
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-908878-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-908878-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-908878-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0528 20:45:25.277446   27440 out.go:291] Setting OutFile to fd 1 ...
	I0528 20:45:25.277557   27440 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:45:25.277565   27440 out.go:304] Setting ErrFile to fd 2...
	I0528 20:45:25.277569   27440 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:45:25.277717   27440 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
	I0528 20:45:25.277916   27440 out.go:298] Setting JSON to false
	I0528 20:45:25.277946   27440 mustload.go:65] Loading cluster: ha-908878
	I0528 20:45:25.277977   27440 notify.go:220] Checking for updates...
	I0528 20:45:25.278360   27440 config.go:182] Loaded profile config "ha-908878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 20:45:25.278380   27440 status.go:255] checking status of ha-908878 ...
	I0528 20:45:25.279117   27440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:25.279164   27440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:25.299093   27440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44893
	I0528 20:45:25.299546   27440 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:25.300171   27440 main.go:141] libmachine: Using API Version  1
	I0528 20:45:25.300196   27440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:25.300654   27440 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:25.300846   27440 main.go:141] libmachine: (ha-908878) Calling .GetState
	I0528 20:45:25.302506   27440 status.go:330] ha-908878 host status = "Running" (err=<nil>)
	I0528 20:45:25.302522   27440 host.go:66] Checking if "ha-908878" exists ...
	I0528 20:45:25.302918   27440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:25.302980   27440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:25.317354   27440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33589
	I0528 20:45:25.317679   27440 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:25.318118   27440 main.go:141] libmachine: Using API Version  1
	I0528 20:45:25.318137   27440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:25.318519   27440 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:25.318698   27440 main.go:141] libmachine: (ha-908878) Calling .GetIP
	I0528 20:45:25.321378   27440 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:45:25.321735   27440 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:45:25.321776   27440 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:45:25.321869   27440 host.go:66] Checking if "ha-908878" exists ...
	I0528 20:45:25.322145   27440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:25.322186   27440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:25.337258   27440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36507
	I0528 20:45:25.337590   27440 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:25.338037   27440 main.go:141] libmachine: Using API Version  1
	I0528 20:45:25.338055   27440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:25.338318   27440 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:25.338486   27440 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:45:25.338642   27440 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 20:45:25.338685   27440 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:45:25.341095   27440 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:45:25.341474   27440 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:45:25.341497   27440 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:45:25.341541   27440 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:45:25.341706   27440 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:45:25.341941   27440 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:45:25.342094   27440 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/id_rsa Username:docker}
	I0528 20:45:25.427392   27440 ssh_runner.go:195] Run: systemctl --version
	I0528 20:45:25.433409   27440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 20:45:25.448914   27440 kubeconfig.go:125] found "ha-908878" server: "https://192.168.39.254:8443"
	I0528 20:45:25.448939   27440 api_server.go:166] Checking apiserver status ...
	I0528 20:45:25.448975   27440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 20:45:25.462316   27440 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1158/cgroup
	W0528 20:45:25.471397   27440 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1158/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0528 20:45:25.471447   27440 ssh_runner.go:195] Run: ls
	I0528 20:45:25.475632   27440 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0528 20:45:25.481703   27440 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0528 20:45:25.481728   27440 status.go:422] ha-908878 apiserver status = Running (err=<nil>)
	I0528 20:45:25.481740   27440 status.go:257] ha-908878 status: &{Name:ha-908878 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0528 20:45:25.481784   27440 status.go:255] checking status of ha-908878-m02 ...
	I0528 20:45:25.482165   27440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:25.482200   27440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:25.496358   27440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39875
	I0528 20:45:25.496744   27440 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:25.497148   27440 main.go:141] libmachine: Using API Version  1
	I0528 20:45:25.497166   27440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:25.497495   27440 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:25.497685   27440 main.go:141] libmachine: (ha-908878-m02) Calling .GetState
	I0528 20:45:25.499160   27440 status.go:330] ha-908878-m02 host status = "Running" (err=<nil>)
	I0528 20:45:25.499177   27440 host.go:66] Checking if "ha-908878-m02" exists ...
	I0528 20:45:25.499538   27440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:25.499565   27440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:25.513951   27440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41517
	I0528 20:45:25.514277   27440 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:25.514728   27440 main.go:141] libmachine: Using API Version  1
	I0528 20:45:25.514755   27440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:25.515033   27440 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:25.515232   27440 main.go:141] libmachine: (ha-908878-m02) Calling .GetIP
	I0528 20:45:25.517880   27440 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:45:25.518235   27440 main.go:141] libmachine: (ha-908878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:bd:28", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:39:35 +0000 UTC Type:0 Mac:52:54:00:b4:bd:28 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:ha-908878-m02 Clientid:01:52:54:00:b4:bd:28}
	I0528 20:45:25.518278   27440 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:45:25.518425   27440 host.go:66] Checking if "ha-908878-m02" exists ...
	I0528 20:45:25.518799   27440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:25.518824   27440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:25.532490   27440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35941
	I0528 20:45:25.532838   27440 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:25.533283   27440 main.go:141] libmachine: Using API Version  1
	I0528 20:45:25.533302   27440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:25.533564   27440 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:25.533795   27440 main.go:141] libmachine: (ha-908878-m02) Calling .DriverName
	I0528 20:45:25.533955   27440 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 20:45:25.533973   27440 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHHostname
	I0528 20:45:25.536167   27440 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:45:25.536507   27440 main.go:141] libmachine: (ha-908878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:bd:28", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:39:35 +0000 UTC Type:0 Mac:52:54:00:b4:bd:28 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:ha-908878-m02 Clientid:01:52:54:00:b4:bd:28}
	I0528 20:45:25.536534   27440 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:45:25.536683   27440 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHPort
	I0528 20:45:25.536842   27440 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHKeyPath
	I0528 20:45:25.536970   27440 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHUsername
	I0528 20:45:25.537117   27440 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m02/id_rsa Username:docker}
	W0528 20:45:27.290088   27440 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.239:22: connect: no route to host
	W0528 20:45:27.290181   27440 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.239:22: connect: no route to host
	E0528 20:45:27.290198   27440 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.239:22: connect: no route to host
	I0528 20:45:27.290205   27440 status.go:257] ha-908878-m02 status: &{Name:ha-908878-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0528 20:45:27.290237   27440 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.239:22: connect: no route to host
	I0528 20:45:27.290250   27440 status.go:255] checking status of ha-908878-m03 ...
	I0528 20:45:27.290567   27440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:27.290613   27440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:27.304970   27440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45551
	I0528 20:45:27.305397   27440 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:27.305903   27440 main.go:141] libmachine: Using API Version  1
	I0528 20:45:27.305929   27440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:27.306241   27440 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:27.306426   27440 main.go:141] libmachine: (ha-908878-m03) Calling .GetState
	I0528 20:45:27.307907   27440 status.go:330] ha-908878-m03 host status = "Running" (err=<nil>)
	I0528 20:45:27.307924   27440 host.go:66] Checking if "ha-908878-m03" exists ...
	I0528 20:45:27.308224   27440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:27.308264   27440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:27.322871   27440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37205
	I0528 20:45:27.323309   27440 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:27.323764   27440 main.go:141] libmachine: Using API Version  1
	I0528 20:45:27.323785   27440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:27.324119   27440 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:27.324335   27440 main.go:141] libmachine: (ha-908878-m03) Calling .GetIP
	I0528 20:45:27.327322   27440 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:45:27.327794   27440 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:45:27.327817   27440 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:45:27.328017   27440 host.go:66] Checking if "ha-908878-m03" exists ...
	I0528 20:45:27.328436   27440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:27.328486   27440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:27.345074   27440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37153
	I0528 20:45:27.345505   27440 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:27.346092   27440 main.go:141] libmachine: Using API Version  1
	I0528 20:45:27.346117   27440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:27.346435   27440 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:27.346626   27440 main.go:141] libmachine: (ha-908878-m03) Calling .DriverName
	I0528 20:45:27.346797   27440 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 20:45:27.346830   27440 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHHostname
	I0528 20:45:27.349539   27440 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:45:27.349916   27440 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:45:27.349934   27440 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:45:27.350056   27440 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHPort
	I0528 20:45:27.350199   27440 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHKeyPath
	I0528 20:45:27.350352   27440 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHUsername
	I0528 20:45:27.350487   27440 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m03/id_rsa Username:docker}
	I0528 20:45:27.437668   27440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 20:45:27.453369   27440 kubeconfig.go:125] found "ha-908878" server: "https://192.168.39.254:8443"
	I0528 20:45:27.453395   27440 api_server.go:166] Checking apiserver status ...
	I0528 20:45:27.453427   27440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 20:45:27.467621   27440 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1538/cgroup
	W0528 20:45:27.477006   27440 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1538/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0528 20:45:27.477052   27440 ssh_runner.go:195] Run: ls
	I0528 20:45:27.481311   27440 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0528 20:45:27.485489   27440 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0528 20:45:27.485509   27440 status.go:422] ha-908878-m03 apiserver status = Running (err=<nil>)
	I0528 20:45:27.485519   27440 status.go:257] ha-908878-m03 status: &{Name:ha-908878-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0528 20:45:27.485537   27440 status.go:255] checking status of ha-908878-m04 ...
	I0528 20:45:27.485838   27440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:27.485864   27440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:27.500198   27440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32805
	I0528 20:45:27.500601   27440 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:27.501043   27440 main.go:141] libmachine: Using API Version  1
	I0528 20:45:27.501062   27440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:27.501349   27440 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:27.501513   27440 main.go:141] libmachine: (ha-908878-m04) Calling .GetState
	I0528 20:45:27.502888   27440 status.go:330] ha-908878-m04 host status = "Running" (err=<nil>)
	I0528 20:45:27.502901   27440 host.go:66] Checking if "ha-908878-m04" exists ...
	I0528 20:45:27.503172   27440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:27.503192   27440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:27.517478   27440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34571
	I0528 20:45:27.517806   27440 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:27.518235   27440 main.go:141] libmachine: Using API Version  1
	I0528 20:45:27.518252   27440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:27.518530   27440 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:27.518689   27440 main.go:141] libmachine: (ha-908878-m04) Calling .GetIP
	I0528 20:45:27.521082   27440 main.go:141] libmachine: (ha-908878-m04) DBG | domain ha-908878-m04 has defined MAC address 52:54:00:dd:1f:24 in network mk-ha-908878
	I0528 20:45:27.521447   27440 main.go:141] libmachine: (ha-908878-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:1f:24", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:42:12 +0000 UTC Type:0 Mac:52:54:00:dd:1f:24 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-908878-m04 Clientid:01:52:54:00:dd:1f:24}
	I0528 20:45:27.521475   27440 main.go:141] libmachine: (ha-908878-m04) DBG | domain ha-908878-m04 has defined IP address 192.168.39.38 and MAC address 52:54:00:dd:1f:24 in network mk-ha-908878
	I0528 20:45:27.521567   27440 host.go:66] Checking if "ha-908878-m04" exists ...
	I0528 20:45:27.521912   27440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:27.521950   27440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:27.536703   27440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46173
	I0528 20:45:27.537043   27440 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:27.537440   27440 main.go:141] libmachine: Using API Version  1
	I0528 20:45:27.537461   27440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:27.537784   27440 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:27.537954   27440 main.go:141] libmachine: (ha-908878-m04) Calling .DriverName
	I0528 20:45:27.538105   27440 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 20:45:27.538129   27440 main.go:141] libmachine: (ha-908878-m04) Calling .GetSSHHostname
	I0528 20:45:27.540874   27440 main.go:141] libmachine: (ha-908878-m04) DBG | domain ha-908878-m04 has defined MAC address 52:54:00:dd:1f:24 in network mk-ha-908878
	I0528 20:45:27.541240   27440 main.go:141] libmachine: (ha-908878-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:1f:24", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:42:12 +0000 UTC Type:0 Mac:52:54:00:dd:1f:24 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-908878-m04 Clientid:01:52:54:00:dd:1f:24}
	I0528 20:45:27.541266   27440 main.go:141] libmachine: (ha-908878-m04) DBG | domain ha-908878-m04 has defined IP address 192.168.39.38 and MAC address 52:54:00:dd:1f:24 in network mk-ha-908878
	I0528 20:45:27.541399   27440 main.go:141] libmachine: (ha-908878-m04) Calling .GetSSHPort
	I0528 20:45:27.541577   27440 main.go:141] libmachine: (ha-908878-m04) Calling .GetSSHKeyPath
	I0528 20:45:27.541712   27440 main.go:141] libmachine: (ha-908878-m04) Calling .GetSSHUsername
	I0528 20:45:27.541853   27440 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m04/id_rsa Username:docker}
	I0528 20:45:27.629049   27440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 20:45:27.643037   27440 status.go:257] ha-908878-m04 status: &{Name:ha-908878-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
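
The status run above probes each control-plane node the same way: an SSH session runs "df -h /var | awk 'NR==2{print $5}'" for storage, then the apiserver is checked via https://192.168.39.254:8443/healthz, which returns 200 for ha-908878 and ha-908878-m03. As a rough standalone illustration only (not minikube's own status code path), a minimal healthz probe could look like the sketch below; the 5-second timeout and InsecureSkipVerify setting are assumptions made for the sketch, since the VIP serves a cluster-signed certificate.

	// healthz_probe.go — minimal sketch of the apiserver health probe seen in the log above.
	// The URL, timeout, and TLS handling are illustrative assumptions, not minikube's client settings.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Skip verification for a quick probe; a real client would load the cluster CA.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}

		resp, err := client.Get("https://192.168.39.254:8443/healthz")
		if err != nil {
			fmt.Println("healthz check failed:", err)
			return
		}
		defer resp.Body.Close()

		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	}
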
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-908878 status -v=7 --alsologtostderr: exit status 3 (4.783023426s)

                                                
                                                
-- stdout --
	ha-908878
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-908878-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-908878-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-908878-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0528 20:45:29.041422   27541 out.go:291] Setting OutFile to fd 1 ...
	I0528 20:45:29.041663   27541 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:45:29.041673   27541 out.go:304] Setting ErrFile to fd 2...
	I0528 20:45:29.041678   27541 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:45:29.041908   27541 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
	I0528 20:45:29.042059   27541 out.go:298] Setting JSON to false
	I0528 20:45:29.042085   27541 mustload.go:65] Loading cluster: ha-908878
	I0528 20:45:29.042205   27541 notify.go:220] Checking for updates...
	I0528 20:45:29.042451   27541 config.go:182] Loaded profile config "ha-908878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 20:45:29.042464   27541 status.go:255] checking status of ha-908878 ...
	I0528 20:45:29.042833   27541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:29.042904   27541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:29.063149   27541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40405
	I0528 20:45:29.063520   27541 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:29.064142   27541 main.go:141] libmachine: Using API Version  1
	I0528 20:45:29.064177   27541 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:29.064503   27541 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:29.064672   27541 main.go:141] libmachine: (ha-908878) Calling .GetState
	I0528 20:45:29.066078   27541 status.go:330] ha-908878 host status = "Running" (err=<nil>)
	I0528 20:45:29.066093   27541 host.go:66] Checking if "ha-908878" exists ...
	I0528 20:45:29.066398   27541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:29.066440   27541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:29.080755   27541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40305
	I0528 20:45:29.081080   27541 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:29.081480   27541 main.go:141] libmachine: Using API Version  1
	I0528 20:45:29.081499   27541 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:29.081755   27541 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:29.081951   27541 main.go:141] libmachine: (ha-908878) Calling .GetIP
	I0528 20:45:29.084417   27541 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:45:29.084896   27541 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:45:29.084915   27541 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:45:29.085104   27541 host.go:66] Checking if "ha-908878" exists ...
	I0528 20:45:29.085452   27541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:29.085505   27541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:29.099423   27541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35241
	I0528 20:45:29.099705   27541 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:29.100086   27541 main.go:141] libmachine: Using API Version  1
	I0528 20:45:29.100104   27541 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:29.100420   27541 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:29.100578   27541 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:45:29.100762   27541 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 20:45:29.100794   27541 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:45:29.103062   27541 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:45:29.103442   27541 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:45:29.103464   27541 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:45:29.103594   27541 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:45:29.103747   27541 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:45:29.103883   27541 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:45:29.104037   27541 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/id_rsa Username:docker}
	I0528 20:45:29.187680   27541 ssh_runner.go:195] Run: systemctl --version
	I0528 20:45:29.193599   27541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 20:45:29.208536   27541 kubeconfig.go:125] found "ha-908878" server: "https://192.168.39.254:8443"
	I0528 20:45:29.208568   27541 api_server.go:166] Checking apiserver status ...
	I0528 20:45:29.208604   27541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 20:45:29.222715   27541 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1158/cgroup
	W0528 20:45:29.232866   27541 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1158/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0528 20:45:29.232924   27541 ssh_runner.go:195] Run: ls
	I0528 20:45:29.237985   27541 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0528 20:45:29.243987   27541 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0528 20:45:29.244007   27541 status.go:422] ha-908878 apiserver status = Running (err=<nil>)
	I0528 20:45:29.244017   27541 status.go:257] ha-908878 status: &{Name:ha-908878 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0528 20:45:29.244036   27541 status.go:255] checking status of ha-908878-m02 ...
	I0528 20:45:29.244383   27541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:29.244422   27541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:29.259227   27541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45173
	I0528 20:45:29.259645   27541 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:29.260208   27541 main.go:141] libmachine: Using API Version  1
	I0528 20:45:29.260233   27541 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:29.260554   27541 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:29.260740   27541 main.go:141] libmachine: (ha-908878-m02) Calling .GetState
	I0528 20:45:29.262291   27541 status.go:330] ha-908878-m02 host status = "Running" (err=<nil>)
	I0528 20:45:29.262307   27541 host.go:66] Checking if "ha-908878-m02" exists ...
	I0528 20:45:29.262668   27541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:29.262708   27541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:29.276732   27541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45145
	I0528 20:45:29.277159   27541 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:29.277663   27541 main.go:141] libmachine: Using API Version  1
	I0528 20:45:29.277683   27541 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:29.278058   27541 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:29.278229   27541 main.go:141] libmachine: (ha-908878-m02) Calling .GetIP
	I0528 20:45:29.280834   27541 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:45:29.281208   27541 main.go:141] libmachine: (ha-908878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:bd:28", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:39:35 +0000 UTC Type:0 Mac:52:54:00:b4:bd:28 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:ha-908878-m02 Clientid:01:52:54:00:b4:bd:28}
	I0528 20:45:29.281231   27541 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:45:29.281382   27541 host.go:66] Checking if "ha-908878-m02" exists ...
	I0528 20:45:29.281660   27541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:29.281693   27541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:29.295543   27541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33341
	I0528 20:45:29.295855   27541 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:29.296192   27541 main.go:141] libmachine: Using API Version  1
	I0528 20:45:29.296205   27541 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:29.296489   27541 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:29.296641   27541 main.go:141] libmachine: (ha-908878-m02) Calling .DriverName
	I0528 20:45:29.296799   27541 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 20:45:29.296817   27541 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHHostname
	I0528 20:45:29.299276   27541 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:45:29.299713   27541 main.go:141] libmachine: (ha-908878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:bd:28", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:39:35 +0000 UTC Type:0 Mac:52:54:00:b4:bd:28 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:ha-908878-m02 Clientid:01:52:54:00:b4:bd:28}
	I0528 20:45:29.299746   27541 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:45:29.299845   27541 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHPort
	I0528 20:45:29.299973   27541 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHKeyPath
	I0528 20:45:29.300096   27541 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHUsername
	I0528 20:45:29.300240   27541 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m02/id_rsa Username:docker}
	W0528 20:45:30.366009   27541 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.239:22: connect: no route to host
	I0528 20:45:30.366065   27541 retry.go:31] will retry after 332.665745ms: dial tcp 192.168.39.239:22: connect: no route to host
	W0528 20:45:33.434023   27541 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.239:22: connect: no route to host
	W0528 20:45:33.434143   27541 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.239:22: connect: no route to host
	E0528 20:45:33.434170   27541 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.239:22: connect: no route to host
	I0528 20:45:33.434178   27541 status.go:257] ha-908878-m02 status: &{Name:ha-908878-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0528 20:45:33.434194   27541 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.239:22: connect: no route to host
	I0528 20:45:33.434202   27541 status.go:255] checking status of ha-908878-m03 ...
	I0528 20:45:33.434511   27541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:33.434555   27541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:33.449266   27541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40197
	I0528 20:45:33.449705   27541 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:33.450168   27541 main.go:141] libmachine: Using API Version  1
	I0528 20:45:33.450182   27541 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:33.450477   27541 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:33.450653   27541 main.go:141] libmachine: (ha-908878-m03) Calling .GetState
	I0528 20:45:33.452039   27541 status.go:330] ha-908878-m03 host status = "Running" (err=<nil>)
	I0528 20:45:33.452056   27541 host.go:66] Checking if "ha-908878-m03" exists ...
	I0528 20:45:33.452331   27541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:33.452373   27541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:33.466303   27541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40523
	I0528 20:45:33.466634   27541 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:33.467007   27541 main.go:141] libmachine: Using API Version  1
	I0528 20:45:33.467027   27541 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:33.467317   27541 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:33.467460   27541 main.go:141] libmachine: (ha-908878-m03) Calling .GetIP
	I0528 20:45:33.469993   27541 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:45:33.470331   27541 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:45:33.470366   27541 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:45:33.470520   27541 host.go:66] Checking if "ha-908878-m03" exists ...
	I0528 20:45:33.470802   27541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:33.470837   27541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:33.485160   27541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40961
	I0528 20:45:33.485539   27541 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:33.486075   27541 main.go:141] libmachine: Using API Version  1
	I0528 20:45:33.486098   27541 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:33.486407   27541 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:33.486586   27541 main.go:141] libmachine: (ha-908878-m03) Calling .DriverName
	I0528 20:45:33.486760   27541 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 20:45:33.486777   27541 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHHostname
	I0528 20:45:33.489157   27541 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:45:33.489531   27541 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:45:33.489555   27541 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:45:33.489716   27541 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHPort
	I0528 20:45:33.489887   27541 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHKeyPath
	I0528 20:45:33.490074   27541 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHUsername
	I0528 20:45:33.490191   27541 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m03/id_rsa Username:docker}
	I0528 20:45:33.573887   27541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 20:45:33.589471   27541 kubeconfig.go:125] found "ha-908878" server: "https://192.168.39.254:8443"
	I0528 20:45:33.589493   27541 api_server.go:166] Checking apiserver status ...
	I0528 20:45:33.589522   27541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 20:45:33.603733   27541 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1538/cgroup
	W0528 20:45:33.613748   27541 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1538/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0528 20:45:33.613819   27541 ssh_runner.go:195] Run: ls
	I0528 20:45:33.618615   27541 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0528 20:45:33.625805   27541 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0528 20:45:33.625825   27541 status.go:422] ha-908878-m03 apiserver status = Running (err=<nil>)
	I0528 20:45:33.625836   27541 status.go:257] ha-908878-m03 status: &{Name:ha-908878-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0528 20:45:33.625853   27541 status.go:255] checking status of ha-908878-m04 ...
	I0528 20:45:33.626138   27541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:33.626181   27541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:33.642197   27541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35455
	I0528 20:45:33.642563   27541 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:33.642984   27541 main.go:141] libmachine: Using API Version  1
	I0528 20:45:33.643002   27541 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:33.643285   27541 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:33.643459   27541 main.go:141] libmachine: (ha-908878-m04) Calling .GetState
	I0528 20:45:33.645025   27541 status.go:330] ha-908878-m04 host status = "Running" (err=<nil>)
	I0528 20:45:33.645053   27541 host.go:66] Checking if "ha-908878-m04" exists ...
	I0528 20:45:33.645438   27541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:33.645483   27541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:33.659469   27541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44213
	I0528 20:45:33.659830   27541 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:33.660226   27541 main.go:141] libmachine: Using API Version  1
	I0528 20:45:33.660244   27541 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:33.660535   27541 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:33.660741   27541 main.go:141] libmachine: (ha-908878-m04) Calling .GetIP
	I0528 20:45:33.663216   27541 main.go:141] libmachine: (ha-908878-m04) DBG | domain ha-908878-m04 has defined MAC address 52:54:00:dd:1f:24 in network mk-ha-908878
	I0528 20:45:33.663680   27541 main.go:141] libmachine: (ha-908878-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:1f:24", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:42:12 +0000 UTC Type:0 Mac:52:54:00:dd:1f:24 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-908878-m04 Clientid:01:52:54:00:dd:1f:24}
	I0528 20:45:33.663737   27541 main.go:141] libmachine: (ha-908878-m04) DBG | domain ha-908878-m04 has defined IP address 192.168.39.38 and MAC address 52:54:00:dd:1f:24 in network mk-ha-908878
	I0528 20:45:33.663906   27541 host.go:66] Checking if "ha-908878-m04" exists ...
	I0528 20:45:33.664194   27541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:33.664223   27541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:33.678639   27541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39489
	I0528 20:45:33.678964   27541 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:33.679389   27541 main.go:141] libmachine: Using API Version  1
	I0528 20:45:33.679406   27541 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:33.679705   27541 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:33.679884   27541 main.go:141] libmachine: (ha-908878-m04) Calling .DriverName
	I0528 20:45:33.680059   27541 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 20:45:33.680077   27541 main.go:141] libmachine: (ha-908878-m04) Calling .GetSSHHostname
	I0528 20:45:33.682520   27541 main.go:141] libmachine: (ha-908878-m04) DBG | domain ha-908878-m04 has defined MAC address 52:54:00:dd:1f:24 in network mk-ha-908878
	I0528 20:45:33.683005   27541 main.go:141] libmachine: (ha-908878-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:1f:24", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:42:12 +0000 UTC Type:0 Mac:52:54:00:dd:1f:24 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-908878-m04 Clientid:01:52:54:00:dd:1f:24}
	I0528 20:45:33.683032   27541 main.go:141] libmachine: (ha-908878-m04) DBG | domain ha-908878-m04 has defined IP address 192.168.39.38 and MAC address 52:54:00:dd:1f:24 in network mk-ha-908878
	I0528 20:45:33.683173   27541 main.go:141] libmachine: (ha-908878-m04) Calling .GetSSHPort
	I0528 20:45:33.683319   27541 main.go:141] libmachine: (ha-908878-m04) Calling .GetSSHKeyPath
	I0528 20:45:33.683489   27541 main.go:141] libmachine: (ha-908878-m04) Calling .GetSSHUsername
	I0528 20:45:33.683647   27541 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m04/id_rsa Username:docker}
	I0528 20:45:33.769513   27541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 20:45:33.784716   27541 status.go:257] ha-908878-m04 status: &{Name:ha-908878-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
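
In both runs the check against ha-908878-m02 fails repeatedly with "dial tcp 192.168.39.239:22: connect: no route to host" (with short retries such as "will retry after 332.665745ms") before the node is reported as host: Error / kubelet: Nonexistent. The sketch below is a minimal standalone reachability check with a similar retry-and-backoff shape, for illustration only; the attempt count, 2-second dial timeout, and doubling backoff are assumptions and not minikube's actual sshutil retry policy.

	// ssh_reachability.go — illustrative sketch of retrying a TCP dial to the stopped node's SSH port.
	// The address comes from the log (ha-908878-m02); retry parameters are assumptions for the sketch.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		const addr = "192.168.39.239:22" // ha-908878-m02 from the log above
		backoff := 300 * time.Millisecond

		for attempt := 1; attempt <= 3; attempt++ {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				conn.Close()
				fmt.Println("ssh port reachable")
				return
			}
			fmt.Printf("attempt %d: %v (retrying in %v)\n", attempt, err, backoff)
			time.Sleep(backoff)
			backoff *= 2 // simple exponential backoff for the sketch
		}
		fmt.Println("giving up: host unreachable; status would be reported as Error")
	}
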
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-908878 status -v=7 --alsologtostderr: exit status 3 (4.379842319s)

                                                
                                                
-- stdout --
	ha-908878
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-908878-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-908878-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-908878-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0528 20:45:35.789289   27641 out.go:291] Setting OutFile to fd 1 ...
	I0528 20:45:35.789409   27641 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:45:35.789424   27641 out.go:304] Setting ErrFile to fd 2...
	I0528 20:45:35.789430   27641 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:45:35.789675   27641 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
	I0528 20:45:35.789928   27641 out.go:298] Setting JSON to false
	I0528 20:45:35.789968   27641 mustload.go:65] Loading cluster: ha-908878
	I0528 20:45:35.790112   27641 notify.go:220] Checking for updates...
	I0528 20:45:35.790476   27641 config.go:182] Loaded profile config "ha-908878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 20:45:35.790498   27641 status.go:255] checking status of ha-908878 ...
	I0528 20:45:35.790995   27641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:35.791060   27641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:35.808108   27641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45631
	I0528 20:45:35.808512   27641 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:35.809062   27641 main.go:141] libmachine: Using API Version  1
	I0528 20:45:35.809088   27641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:35.809535   27641 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:35.809743   27641 main.go:141] libmachine: (ha-908878) Calling .GetState
	I0528 20:45:35.811412   27641 status.go:330] ha-908878 host status = "Running" (err=<nil>)
	I0528 20:45:35.811425   27641 host.go:66] Checking if "ha-908878" exists ...
	I0528 20:45:35.811689   27641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:35.811727   27641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:35.826674   27641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33221
	I0528 20:45:35.827025   27641 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:35.827532   27641 main.go:141] libmachine: Using API Version  1
	I0528 20:45:35.827564   27641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:35.827853   27641 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:35.828029   27641 main.go:141] libmachine: (ha-908878) Calling .GetIP
	I0528 20:45:35.830335   27641 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:45:35.830833   27641 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:45:35.830871   27641 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:45:35.830996   27641 host.go:66] Checking if "ha-908878" exists ...
	I0528 20:45:35.831435   27641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:35.831481   27641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:35.846526   27641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32795
	I0528 20:45:35.846963   27641 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:35.847381   27641 main.go:141] libmachine: Using API Version  1
	I0528 20:45:35.847397   27641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:35.847696   27641 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:35.847860   27641 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:45:35.848021   27641 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 20:45:35.848063   27641 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:45:35.850427   27641 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:45:35.850862   27641 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:45:35.850900   27641 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:45:35.851018   27641 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:45:35.851201   27641 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:45:35.851432   27641 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:45:35.851609   27641 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/id_rsa Username:docker}
	I0528 20:45:35.934252   27641 ssh_runner.go:195] Run: systemctl --version
	I0528 20:45:35.940283   27641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 20:45:35.955742   27641 kubeconfig.go:125] found "ha-908878" server: "https://192.168.39.254:8443"
	I0528 20:45:35.955784   27641 api_server.go:166] Checking apiserver status ...
	I0528 20:45:35.955821   27641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 20:45:35.969047   27641 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1158/cgroup
	W0528 20:45:35.978206   27641 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1158/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0528 20:45:35.978249   27641 ssh_runner.go:195] Run: ls
	I0528 20:45:35.982758   27641 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0528 20:45:35.986967   27641 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0528 20:45:35.986988   27641 status.go:422] ha-908878 apiserver status = Running (err=<nil>)
	I0528 20:45:35.987002   27641 status.go:257] ha-908878 status: &{Name:ha-908878 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0528 20:45:35.987019   27641 status.go:255] checking status of ha-908878-m02 ...
	I0528 20:45:35.987281   27641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:35.987317   27641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:36.001972   27641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40535
	I0528 20:45:36.002404   27641 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:36.002843   27641 main.go:141] libmachine: Using API Version  1
	I0528 20:45:36.002867   27641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:36.003128   27641 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:36.003304   27641 main.go:141] libmachine: (ha-908878-m02) Calling .GetState
	I0528 20:45:36.004909   27641 status.go:330] ha-908878-m02 host status = "Running" (err=<nil>)
	I0528 20:45:36.004925   27641 host.go:66] Checking if "ha-908878-m02" exists ...
	I0528 20:45:36.005194   27641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:36.005229   27641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:36.019874   27641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34181
	I0528 20:45:36.020201   27641 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:36.020644   27641 main.go:141] libmachine: Using API Version  1
	I0528 20:45:36.020672   27641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:36.020990   27641 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:36.021167   27641 main.go:141] libmachine: (ha-908878-m02) Calling .GetIP
	I0528 20:45:36.023671   27641 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:45:36.024101   27641 main.go:141] libmachine: (ha-908878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:bd:28", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:39:35 +0000 UTC Type:0 Mac:52:54:00:b4:bd:28 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:ha-908878-m02 Clientid:01:52:54:00:b4:bd:28}
	I0528 20:45:36.024126   27641 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:45:36.024286   27641 host.go:66] Checking if "ha-908878-m02" exists ...
	I0528 20:45:36.024561   27641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:36.024591   27641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:36.038489   27641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35029
	I0528 20:45:36.038835   27641 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:36.039243   27641 main.go:141] libmachine: Using API Version  1
	I0528 20:45:36.039264   27641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:36.039545   27641 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:36.039730   27641 main.go:141] libmachine: (ha-908878-m02) Calling .DriverName
	I0528 20:45:36.039870   27641 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 20:45:36.039886   27641 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHHostname
	I0528 20:45:36.042453   27641 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:45:36.042882   27641 main.go:141] libmachine: (ha-908878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:bd:28", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:39:35 +0000 UTC Type:0 Mac:52:54:00:b4:bd:28 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:ha-908878-m02 Clientid:01:52:54:00:b4:bd:28}
	I0528 20:45:36.042915   27641 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:45:36.043039   27641 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHPort
	I0528 20:45:36.043196   27641 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHKeyPath
	I0528 20:45:36.043316   27641 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHUsername
	I0528 20:45:36.043419   27641 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m02/id_rsa Username:docker}
	W0528 20:45:36.505957   27641 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.239:22: connect: no route to host
	I0528 20:45:36.506007   27641 retry.go:31] will retry after 206.794204ms: dial tcp 192.168.39.239:22: connect: no route to host
	W0528 20:45:39.770020   27641 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.239:22: connect: no route to host
	W0528 20:45:39.770097   27641 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.239:22: connect: no route to host
	E0528 20:45:39.770112   27641 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.239:22: connect: no route to host
	I0528 20:45:39.770123   27641 status.go:257] ha-908878-m02 status: &{Name:ha-908878-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0528 20:45:39.770145   27641 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.239:22: connect: no route to host
	I0528 20:45:39.770151   27641 status.go:255] checking status of ha-908878-m03 ...
	I0528 20:45:39.770442   27641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:39.770482   27641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:39.785239   27641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33647
	I0528 20:45:39.785669   27641 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:39.786183   27641 main.go:141] libmachine: Using API Version  1
	I0528 20:45:39.786210   27641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:39.786492   27641 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:39.786668   27641 main.go:141] libmachine: (ha-908878-m03) Calling .GetState
	I0528 20:45:39.788136   27641 status.go:330] ha-908878-m03 host status = "Running" (err=<nil>)
	I0528 20:45:39.788153   27641 host.go:66] Checking if "ha-908878-m03" exists ...
	I0528 20:45:39.788449   27641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:39.788494   27641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:39.803973   27641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33835
	I0528 20:45:39.804328   27641 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:39.804731   27641 main.go:141] libmachine: Using API Version  1
	I0528 20:45:39.804751   27641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:39.805027   27641 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:39.805202   27641 main.go:141] libmachine: (ha-908878-m03) Calling .GetIP
	I0528 20:45:39.807962   27641 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:45:39.808401   27641 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:45:39.808471   27641 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:45:39.808550   27641 host.go:66] Checking if "ha-908878-m03" exists ...
	I0528 20:45:39.808856   27641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:39.808919   27641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:39.824260   27641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40279
	I0528 20:45:39.824612   27641 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:39.824984   27641 main.go:141] libmachine: Using API Version  1
	I0528 20:45:39.825013   27641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:39.825326   27641 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:39.825503   27641 main.go:141] libmachine: (ha-908878-m03) Calling .DriverName
	I0528 20:45:39.825678   27641 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 20:45:39.825699   27641 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHHostname
	I0528 20:45:39.828392   27641 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:45:39.828861   27641 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:45:39.828899   27641 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:45:39.829039   27641 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHPort
	I0528 20:45:39.829184   27641 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHKeyPath
	I0528 20:45:39.829337   27641 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHUsername
	I0528 20:45:39.829466   27641 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m03/id_rsa Username:docker}
	I0528 20:45:39.914524   27641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 20:45:39.932990   27641 kubeconfig.go:125] found "ha-908878" server: "https://192.168.39.254:8443"
	I0528 20:45:39.933016   27641 api_server.go:166] Checking apiserver status ...
	I0528 20:45:39.933052   27641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 20:45:39.949050   27641 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1538/cgroup
	W0528 20:45:39.960627   27641 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1538/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0528 20:45:39.960665   27641 ssh_runner.go:195] Run: ls
	I0528 20:45:39.964986   27641 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0528 20:45:39.970667   27641 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0528 20:45:39.970688   27641 status.go:422] ha-908878-m03 apiserver status = Running (err=<nil>)
	I0528 20:45:39.970697   27641 status.go:257] ha-908878-m03 status: &{Name:ha-908878-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0528 20:45:39.970713   27641 status.go:255] checking status of ha-908878-m04 ...
	I0528 20:45:39.970991   27641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:39.971026   27641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:39.985112   27641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42463
	I0528 20:45:39.985465   27641 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:39.985977   27641 main.go:141] libmachine: Using API Version  1
	I0528 20:45:39.985996   27641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:39.986292   27641 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:39.986490   27641 main.go:141] libmachine: (ha-908878-m04) Calling .GetState
	I0528 20:45:39.988062   27641 status.go:330] ha-908878-m04 host status = "Running" (err=<nil>)
	I0528 20:45:39.988079   27641 host.go:66] Checking if "ha-908878-m04" exists ...
	I0528 20:45:39.988350   27641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:39.988396   27641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:40.002879   27641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41623
	I0528 20:45:40.003261   27641 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:40.003703   27641 main.go:141] libmachine: Using API Version  1
	I0528 20:45:40.003722   27641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:40.003942   27641 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:40.004130   27641 main.go:141] libmachine: (ha-908878-m04) Calling .GetIP
	I0528 20:45:40.006785   27641 main.go:141] libmachine: (ha-908878-m04) DBG | domain ha-908878-m04 has defined MAC address 52:54:00:dd:1f:24 in network mk-ha-908878
	I0528 20:45:40.007233   27641 main.go:141] libmachine: (ha-908878-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:1f:24", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:42:12 +0000 UTC Type:0 Mac:52:54:00:dd:1f:24 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-908878-m04 Clientid:01:52:54:00:dd:1f:24}
	I0528 20:45:40.007266   27641 main.go:141] libmachine: (ha-908878-m04) DBG | domain ha-908878-m04 has defined IP address 192.168.39.38 and MAC address 52:54:00:dd:1f:24 in network mk-ha-908878
	I0528 20:45:40.007378   27641 host.go:66] Checking if "ha-908878-m04" exists ...
	I0528 20:45:40.007661   27641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:40.007700   27641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:40.021088   27641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40033
	I0528 20:45:40.021392   27641 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:40.021868   27641 main.go:141] libmachine: Using API Version  1
	I0528 20:45:40.021888   27641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:40.022190   27641 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:40.022378   27641 main.go:141] libmachine: (ha-908878-m04) Calling .DriverName
	I0528 20:45:40.022543   27641 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 20:45:40.022565   27641 main.go:141] libmachine: (ha-908878-m04) Calling .GetSSHHostname
	I0528 20:45:40.025198   27641 main.go:141] libmachine: (ha-908878-m04) DBG | domain ha-908878-m04 has defined MAC address 52:54:00:dd:1f:24 in network mk-ha-908878
	I0528 20:45:40.025621   27641 main.go:141] libmachine: (ha-908878-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:1f:24", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:42:12 +0000 UTC Type:0 Mac:52:54:00:dd:1f:24 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-908878-m04 Clientid:01:52:54:00:dd:1f:24}
	I0528 20:45:40.025651   27641 main.go:141] libmachine: (ha-908878-m04) DBG | domain ha-908878-m04 has defined IP address 192.168.39.38 and MAC address 52:54:00:dd:1f:24 in network mk-ha-908878
	I0528 20:45:40.025781   27641 main.go:141] libmachine: (ha-908878-m04) Calling .GetSSHPort
	I0528 20:45:40.025924   27641 main.go:141] libmachine: (ha-908878-m04) Calling .GetSSHKeyPath
	I0528 20:45:40.026078   27641 main.go:141] libmachine: (ha-908878-m04) Calling .GetSSHUsername
	I0528 20:45:40.026216   27641 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m04/id_rsa Username:docker}
	I0528 20:45:40.108950   27641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 20:45:40.123417   27641 status.go:257] ha-908878-m04 status: &{Name:ha-908878-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
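Note: the stderr above shows each node being probed over SSH; for ha-908878-m02 the dial to 192.168.39.239:22 fails with "no route to host", so the node is reported with Host:Error and Kubelet/APIServer:Nonexistent. A minimal Go sketch of that degrade-on-dial-failure pattern follows; the NodeStatus type and probeNode helper are illustrative assumptions, not minikube's actual status.go implementation.

	// status_probe_sketch.go — illustrative only; names and types are assumptions.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	type NodeStatus struct {
		Name, Host, Kubelet, APIServer string
	}

	// probeNode mirrors the pattern visible in the log: if the SSH port cannot be
	// reached, report Host:Error and Nonexistent components instead of failing hard.
	func probeNode(name, ip string) NodeStatus {
		conn, err := net.DialTimeout("tcp", net.JoinHostPort(ip, "22"), 3*time.Second)
		if err != nil {
			return NodeStatus{Name: name, Host: "Error", Kubelet: "Nonexistent", APIServer: "Nonexistent"}
		}
		conn.Close()
		// A reachable node would go on to check kubelet and the apiserver;
		// this sketch stops at the SSH reachability step.
		return NodeStatus{Name: name, Host: "Running", Kubelet: "Running", APIServer: "Running"}
	}

	func main() {
		fmt.Printf("%+v\n", probeNode("ha-908878-m02", "192.168.39.239"))
	}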
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-908878 status -v=7 --alsologtostderr: exit status 3 (3.701880528s)

                                                
                                                
-- stdout --
	ha-908878
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-908878-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-908878-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-908878-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0528 20:45:42.612898   27757 out.go:291] Setting OutFile to fd 1 ...
	I0528 20:45:42.613025   27757 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:45:42.613034   27757 out.go:304] Setting ErrFile to fd 2...
	I0528 20:45:42.613038   27757 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:45:42.613223   27757 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
	I0528 20:45:42.613413   27757 out.go:298] Setting JSON to false
	I0528 20:45:42.613442   27757 mustload.go:65] Loading cluster: ha-908878
	I0528 20:45:42.613477   27757 notify.go:220] Checking for updates...
	I0528 20:45:42.615297   27757 config.go:182] Loaded profile config "ha-908878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 20:45:42.615348   27757 status.go:255] checking status of ha-908878 ...
	I0528 20:45:42.615740   27757 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:42.615791   27757 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:42.631820   27757 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39897
	I0528 20:45:42.632344   27757 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:42.632888   27757 main.go:141] libmachine: Using API Version  1
	I0528 20:45:42.632912   27757 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:42.633368   27757 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:42.633562   27757 main.go:141] libmachine: (ha-908878) Calling .GetState
	I0528 20:45:42.635270   27757 status.go:330] ha-908878 host status = "Running" (err=<nil>)
	I0528 20:45:42.635288   27757 host.go:66] Checking if "ha-908878" exists ...
	I0528 20:45:42.635666   27757 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:42.635713   27757 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:42.649599   27757 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43767
	I0528 20:45:42.649955   27757 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:42.650323   27757 main.go:141] libmachine: Using API Version  1
	I0528 20:45:42.650339   27757 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:42.650614   27757 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:42.650761   27757 main.go:141] libmachine: (ha-908878) Calling .GetIP
	I0528 20:45:42.653143   27757 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:45:42.653497   27757 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:45:42.653519   27757 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:45:42.653619   27757 host.go:66] Checking if "ha-908878" exists ...
	I0528 20:45:42.653919   27757 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:42.653955   27757 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:42.667794   27757 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39213
	I0528 20:45:42.668097   27757 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:42.668514   27757 main.go:141] libmachine: Using API Version  1
	I0528 20:45:42.668534   27757 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:42.668866   27757 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:42.669047   27757 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:45:42.669217   27757 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 20:45:42.669243   27757 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:45:42.671468   27757 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:45:42.671817   27757 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:45:42.671850   27757 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:45:42.672015   27757 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:45:42.672175   27757 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:45:42.672331   27757 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:45:42.672488   27757 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/id_rsa Username:docker}
	I0528 20:45:42.755667   27757 ssh_runner.go:195] Run: systemctl --version
	I0528 20:45:42.761499   27757 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 20:45:42.775067   27757 kubeconfig.go:125] found "ha-908878" server: "https://192.168.39.254:8443"
	I0528 20:45:42.775090   27757 api_server.go:166] Checking apiserver status ...
	I0528 20:45:42.775115   27757 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 20:45:42.788094   27757 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1158/cgroup
	W0528 20:45:42.796729   27757 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1158/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0528 20:45:42.796774   27757 ssh_runner.go:195] Run: ls
	I0528 20:45:42.801028   27757 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0528 20:45:42.805141   27757 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0528 20:45:42.805162   27757 status.go:422] ha-908878 apiserver status = Running (err=<nil>)
	I0528 20:45:42.805179   27757 status.go:257] ha-908878 status: &{Name:ha-908878 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0528 20:45:42.805200   27757 status.go:255] checking status of ha-908878-m02 ...
	I0528 20:45:42.805498   27757 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:42.805528   27757 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:42.820013   27757 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40065
	I0528 20:45:42.820415   27757 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:42.820907   27757 main.go:141] libmachine: Using API Version  1
	I0528 20:45:42.820927   27757 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:42.821227   27757 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:42.821406   27757 main.go:141] libmachine: (ha-908878-m02) Calling .GetState
	I0528 20:45:42.822802   27757 status.go:330] ha-908878-m02 host status = "Running" (err=<nil>)
	I0528 20:45:42.822817   27757 host.go:66] Checking if "ha-908878-m02" exists ...
	I0528 20:45:42.823092   27757 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:42.823129   27757 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:42.837602   27757 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46295
	I0528 20:45:42.837945   27757 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:42.838411   27757 main.go:141] libmachine: Using API Version  1
	I0528 20:45:42.838432   27757 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:42.838737   27757 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:42.838939   27757 main.go:141] libmachine: (ha-908878-m02) Calling .GetIP
	I0528 20:45:42.841409   27757 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:45:42.841842   27757 main.go:141] libmachine: (ha-908878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:bd:28", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:39:35 +0000 UTC Type:0 Mac:52:54:00:b4:bd:28 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:ha-908878-m02 Clientid:01:52:54:00:b4:bd:28}
	I0528 20:45:42.841871   27757 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:45:42.842027   27757 host.go:66] Checking if "ha-908878-m02" exists ...
	I0528 20:45:42.842313   27757 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:42.842345   27757 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:42.856294   27757 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41101
	I0528 20:45:42.856686   27757 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:42.857132   27757 main.go:141] libmachine: Using API Version  1
	I0528 20:45:42.857151   27757 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:42.857475   27757 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:42.857643   27757 main.go:141] libmachine: (ha-908878-m02) Calling .DriverName
	I0528 20:45:42.857815   27757 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 20:45:42.857839   27757 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHHostname
	I0528 20:45:42.860356   27757 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:45:42.860800   27757 main.go:141] libmachine: (ha-908878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:bd:28", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:39:35 +0000 UTC Type:0 Mac:52:54:00:b4:bd:28 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:ha-908878-m02 Clientid:01:52:54:00:b4:bd:28}
	I0528 20:45:42.860826   27757 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:45:42.860950   27757 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHPort
	I0528 20:45:42.861110   27757 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHKeyPath
	I0528 20:45:42.861261   27757 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHUsername
	I0528 20:45:42.861374   27757 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m02/id_rsa Username:docker}
	W0528 20:45:45.917969   27757 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.239:22: connect: no route to host
	W0528 20:45:45.918081   27757 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.239:22: connect: no route to host
	E0528 20:45:45.918108   27757 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.239:22: connect: no route to host
	I0528 20:45:45.918118   27757 status.go:257] ha-908878-m02 status: &{Name:ha-908878-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0528 20:45:45.918146   27757 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.239:22: connect: no route to host
	I0528 20:45:45.918154   27757 status.go:255] checking status of ha-908878-m03 ...
	I0528 20:45:45.918556   27757 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:45.918596   27757 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:45.933127   27757 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39167
	I0528 20:45:45.933501   27757 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:45.934002   27757 main.go:141] libmachine: Using API Version  1
	I0528 20:45:45.934025   27757 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:45.934338   27757 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:45.934498   27757 main.go:141] libmachine: (ha-908878-m03) Calling .GetState
	I0528 20:45:45.936015   27757 status.go:330] ha-908878-m03 host status = "Running" (err=<nil>)
	I0528 20:45:45.936032   27757 host.go:66] Checking if "ha-908878-m03" exists ...
	I0528 20:45:45.936365   27757 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:45.936413   27757 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:45.951500   27757 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33075
	I0528 20:45:45.951845   27757 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:45.952300   27757 main.go:141] libmachine: Using API Version  1
	I0528 20:45:45.952319   27757 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:45.952613   27757 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:45.952783   27757 main.go:141] libmachine: (ha-908878-m03) Calling .GetIP
	I0528 20:45:45.955832   27757 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:45:45.956232   27757 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:45:45.956257   27757 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:45:45.956370   27757 host.go:66] Checking if "ha-908878-m03" exists ...
	I0528 20:45:45.956676   27757 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:45.956710   27757 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:45.971035   27757 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42693
	I0528 20:45:45.971418   27757 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:45.971841   27757 main.go:141] libmachine: Using API Version  1
	I0528 20:45:45.971862   27757 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:45.972168   27757 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:45.972324   27757 main.go:141] libmachine: (ha-908878-m03) Calling .DriverName
	I0528 20:45:45.972510   27757 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 20:45:45.972531   27757 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHHostname
	I0528 20:45:45.975114   27757 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:45:45.975541   27757 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:45:45.975570   27757 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:45:45.975680   27757 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHPort
	I0528 20:45:45.975824   27757 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHKeyPath
	I0528 20:45:45.975937   27757 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHUsername
	I0528 20:45:45.976034   27757 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m03/id_rsa Username:docker}
	I0528 20:45:46.061417   27757 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 20:45:46.078919   27757 kubeconfig.go:125] found "ha-908878" server: "https://192.168.39.254:8443"
	I0528 20:45:46.078939   27757 api_server.go:166] Checking apiserver status ...
	I0528 20:45:46.078964   27757 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 20:45:46.093837   27757 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1538/cgroup
	W0528 20:45:46.112111   27757 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1538/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0528 20:45:46.112144   27757 ssh_runner.go:195] Run: ls
	I0528 20:45:46.116788   27757 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0528 20:45:46.120982   27757 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0528 20:45:46.121012   27757 status.go:422] ha-908878-m03 apiserver status = Running (err=<nil>)
	I0528 20:45:46.121024   27757 status.go:257] ha-908878-m03 status: &{Name:ha-908878-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0528 20:45:46.121043   27757 status.go:255] checking status of ha-908878-m04 ...
	I0528 20:45:46.121354   27757 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:46.121386   27757 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:46.135946   27757 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33343
	I0528 20:45:46.136308   27757 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:46.136728   27757 main.go:141] libmachine: Using API Version  1
	I0528 20:45:46.136751   27757 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:46.137072   27757 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:46.137242   27757 main.go:141] libmachine: (ha-908878-m04) Calling .GetState
	I0528 20:45:46.138578   27757 status.go:330] ha-908878-m04 host status = "Running" (err=<nil>)
	I0528 20:45:46.138595   27757 host.go:66] Checking if "ha-908878-m04" exists ...
	I0528 20:45:46.138869   27757 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:46.138912   27757 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:46.152637   27757 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39977
	I0528 20:45:46.152964   27757 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:46.153372   27757 main.go:141] libmachine: Using API Version  1
	I0528 20:45:46.153392   27757 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:46.153648   27757 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:46.153830   27757 main.go:141] libmachine: (ha-908878-m04) Calling .GetIP
	I0528 20:45:46.156746   27757 main.go:141] libmachine: (ha-908878-m04) DBG | domain ha-908878-m04 has defined MAC address 52:54:00:dd:1f:24 in network mk-ha-908878
	I0528 20:45:46.157213   27757 main.go:141] libmachine: (ha-908878-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:1f:24", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:42:12 +0000 UTC Type:0 Mac:52:54:00:dd:1f:24 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-908878-m04 Clientid:01:52:54:00:dd:1f:24}
	I0528 20:45:46.157246   27757 main.go:141] libmachine: (ha-908878-m04) DBG | domain ha-908878-m04 has defined IP address 192.168.39.38 and MAC address 52:54:00:dd:1f:24 in network mk-ha-908878
	I0528 20:45:46.157372   27757 host.go:66] Checking if "ha-908878-m04" exists ...
	I0528 20:45:46.157808   27757 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:46.157851   27757 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:46.171444   27757 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36451
	I0528 20:45:46.171789   27757 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:46.172161   27757 main.go:141] libmachine: Using API Version  1
	I0528 20:45:46.172187   27757 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:46.172531   27757 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:46.172684   27757 main.go:141] libmachine: (ha-908878-m04) Calling .DriverName
	I0528 20:45:46.172854   27757 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 20:45:46.172875   27757 main.go:141] libmachine: (ha-908878-m04) Calling .GetSSHHostname
	I0528 20:45:46.175433   27757 main.go:141] libmachine: (ha-908878-m04) DBG | domain ha-908878-m04 has defined MAC address 52:54:00:dd:1f:24 in network mk-ha-908878
	I0528 20:45:46.175858   27757 main.go:141] libmachine: (ha-908878-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:1f:24", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:42:12 +0000 UTC Type:0 Mac:52:54:00:dd:1f:24 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-908878-m04 Clientid:01:52:54:00:dd:1f:24}
	I0528 20:45:46.175897   27757 main.go:141] libmachine: (ha-908878-m04) DBG | domain ha-908878-m04 has defined IP address 192.168.39.38 and MAC address 52:54:00:dd:1f:24 in network mk-ha-908878
	I0528 20:45:46.176065   27757 main.go:141] libmachine: (ha-908878-m04) Calling .GetSSHPort
	I0528 20:45:46.176193   27757 main.go:141] libmachine: (ha-908878-m04) Calling .GetSSHKeyPath
	I0528 20:45:46.176344   27757 main.go:141] libmachine: (ha-908878-m04) Calling .GetSSHUsername
	I0528 20:45:46.176439   27757 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m04/id_rsa Username:docker}
	I0528 20:45:46.260913   27757 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 20:45:46.274982   27757 status.go:257] ha-908878-m04 status: &{Name:ha-908878-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
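Note: for the reachable control-plane nodes, the log checks apiserver health by fetching https://192.168.39.254:8443/healthz and expecting an HTTP 200 "ok". A standalone Go sketch of that check follows; it skips certificate verification because the cluster CA is not loaded here, and it is illustrative only, not minikube's api_server.go.

	// healthz_sketch.go — illustrative only; mirrors the "Checking apiserver healthz" step in the log.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver certificate is signed by the cluster CA, which this
			// sketch does not load, so verification is skipped for illustration.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.254:8443/healthz")
		if err != nil {
			fmt.Println("apiserver unreachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz returned", resp.StatusCode)
	}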
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-908878 status -v=7 --alsologtostderr: exit status 3 (3.730967395s)

                                                
                                                
-- stdout --
	ha-908878
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-908878-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-908878-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-908878-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0528 20:45:53.857502   27873 out.go:291] Setting OutFile to fd 1 ...
	I0528 20:45:53.857748   27873 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:45:53.857782   27873 out.go:304] Setting ErrFile to fd 2...
	I0528 20:45:53.857790   27873 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:45:53.857963   27873 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
	I0528 20:45:53.858148   27873 out.go:298] Setting JSON to false
	I0528 20:45:53.858183   27873 mustload.go:65] Loading cluster: ha-908878
	I0528 20:45:53.858277   27873 notify.go:220] Checking for updates...
	I0528 20:45:53.858592   27873 config.go:182] Loaded profile config "ha-908878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 20:45:53.858609   27873 status.go:255] checking status of ha-908878 ...
	I0528 20:45:53.858955   27873 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:53.859025   27873 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:53.876813   27873 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37283
	I0528 20:45:53.877247   27873 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:53.877774   27873 main.go:141] libmachine: Using API Version  1
	I0528 20:45:53.877833   27873 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:53.878176   27873 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:53.878396   27873 main.go:141] libmachine: (ha-908878) Calling .GetState
	I0528 20:45:53.879973   27873 status.go:330] ha-908878 host status = "Running" (err=<nil>)
	I0528 20:45:53.879986   27873 host.go:66] Checking if "ha-908878" exists ...
	I0528 20:45:53.880260   27873 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:53.880292   27873 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:53.894204   27873 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42181
	I0528 20:45:53.894554   27873 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:53.894989   27873 main.go:141] libmachine: Using API Version  1
	I0528 20:45:53.895011   27873 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:53.895324   27873 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:53.895504   27873 main.go:141] libmachine: (ha-908878) Calling .GetIP
	I0528 20:45:53.897901   27873 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:45:53.898405   27873 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:45:53.898432   27873 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:45:53.898544   27873 host.go:66] Checking if "ha-908878" exists ...
	I0528 20:45:53.898825   27873 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:53.898855   27873 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:53.913551   27873 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33987
	I0528 20:45:53.913937   27873 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:53.914331   27873 main.go:141] libmachine: Using API Version  1
	I0528 20:45:53.914348   27873 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:53.914647   27873 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:53.914834   27873 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:45:53.915000   27873 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 20:45:53.915023   27873 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:45:53.917800   27873 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:45:53.918270   27873 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:45:53.918301   27873 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:45:53.918450   27873 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:45:53.918617   27873 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:45:53.918786   27873 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:45:53.918945   27873 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/id_rsa Username:docker}
	I0528 20:45:54.002181   27873 ssh_runner.go:195] Run: systemctl --version
	I0528 20:45:54.008944   27873 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 20:45:54.025751   27873 kubeconfig.go:125] found "ha-908878" server: "https://192.168.39.254:8443"
	I0528 20:45:54.025794   27873 api_server.go:166] Checking apiserver status ...
	I0528 20:45:54.025829   27873 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 20:45:54.041167   27873 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1158/cgroup
	W0528 20:45:54.055657   27873 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1158/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0528 20:45:54.055710   27873 ssh_runner.go:195] Run: ls
	I0528 20:45:54.060127   27873 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0528 20:45:54.069297   27873 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0528 20:45:54.069332   27873 status.go:422] ha-908878 apiserver status = Running (err=<nil>)
	I0528 20:45:54.069341   27873 status.go:257] ha-908878 status: &{Name:ha-908878 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0528 20:45:54.069355   27873 status.go:255] checking status of ha-908878-m02 ...
	I0528 20:45:54.069624   27873 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:54.069657   27873 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:54.085052   27873 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39273
	I0528 20:45:54.085555   27873 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:54.086059   27873 main.go:141] libmachine: Using API Version  1
	I0528 20:45:54.086080   27873 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:54.086573   27873 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:54.086784   27873 main.go:141] libmachine: (ha-908878-m02) Calling .GetState
	I0528 20:45:54.088322   27873 status.go:330] ha-908878-m02 host status = "Running" (err=<nil>)
	I0528 20:45:54.088339   27873 host.go:66] Checking if "ha-908878-m02" exists ...
	I0528 20:45:54.088726   27873 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:54.088775   27873 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:54.104025   27873 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42565
	I0528 20:45:54.104453   27873 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:54.104978   27873 main.go:141] libmachine: Using API Version  1
	I0528 20:45:54.105008   27873 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:54.105334   27873 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:54.105515   27873 main.go:141] libmachine: (ha-908878-m02) Calling .GetIP
	I0528 20:45:54.108397   27873 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:45:54.108817   27873 main.go:141] libmachine: (ha-908878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:bd:28", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:39:35 +0000 UTC Type:0 Mac:52:54:00:b4:bd:28 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:ha-908878-m02 Clientid:01:52:54:00:b4:bd:28}
	I0528 20:45:54.108844   27873 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:45:54.108979   27873 host.go:66] Checking if "ha-908878-m02" exists ...
	I0528 20:45:54.109281   27873 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:54.109313   27873 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:54.123460   27873 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38245
	I0528 20:45:54.123837   27873 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:54.124246   27873 main.go:141] libmachine: Using API Version  1
	I0528 20:45:54.124260   27873 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:54.124507   27873 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:54.124663   27873 main.go:141] libmachine: (ha-908878-m02) Calling .DriverName
	I0528 20:45:54.124822   27873 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 20:45:54.124837   27873 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHHostname
	I0528 20:45:54.127357   27873 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:45:54.127835   27873 main.go:141] libmachine: (ha-908878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:bd:28", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:39:35 +0000 UTC Type:0 Mac:52:54:00:b4:bd:28 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:ha-908878-m02 Clientid:01:52:54:00:b4:bd:28}
	I0528 20:45:54.127859   27873 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:45:54.128053   27873 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHPort
	I0528 20:45:54.128231   27873 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHKeyPath
	I0528 20:45:54.128377   27873 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHUsername
	I0528 20:45:54.128523   27873 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m02/id_rsa Username:docker}
	W0528 20:45:57.178036   27873 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.239:22: connect: no route to host
	W0528 20:45:57.178120   27873 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.239:22: connect: no route to host
	E0528 20:45:57.178132   27873 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.239:22: connect: no route to host
	I0528 20:45:57.178140   27873 status.go:257] ha-908878-m02 status: &{Name:ha-908878-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0528 20:45:57.178173   27873 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.239:22: connect: no route to host
	I0528 20:45:57.178180   27873 status.go:255] checking status of ha-908878-m03 ...
	I0528 20:45:57.178467   27873 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:57.178511   27873 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:57.195760   27873 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41059
	I0528 20:45:57.196161   27873 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:57.196609   27873 main.go:141] libmachine: Using API Version  1
	I0528 20:45:57.196628   27873 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:57.196935   27873 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:57.197132   27873 main.go:141] libmachine: (ha-908878-m03) Calling .GetState
	I0528 20:45:57.198746   27873 status.go:330] ha-908878-m03 host status = "Running" (err=<nil>)
	I0528 20:45:57.198762   27873 host.go:66] Checking if "ha-908878-m03" exists ...
	I0528 20:45:57.199164   27873 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:57.199209   27873 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:57.213411   27873 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43433
	I0528 20:45:57.213831   27873 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:57.214328   27873 main.go:141] libmachine: Using API Version  1
	I0528 20:45:57.214346   27873 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:57.214604   27873 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:57.214810   27873 main.go:141] libmachine: (ha-908878-m03) Calling .GetIP
	I0528 20:45:57.217150   27873 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:45:57.217555   27873 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:45:57.217582   27873 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:45:57.217702   27873 host.go:66] Checking if "ha-908878-m03" exists ...
	I0528 20:45:57.218022   27873 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:57.218065   27873 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:57.232541   27873 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40637
	I0528 20:45:57.232926   27873 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:57.233395   27873 main.go:141] libmachine: Using API Version  1
	I0528 20:45:57.233419   27873 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:57.233735   27873 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:57.233930   27873 main.go:141] libmachine: (ha-908878-m03) Calling .DriverName
	I0528 20:45:57.234128   27873 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 20:45:57.234151   27873 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHHostname
	I0528 20:45:57.236878   27873 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:45:57.237283   27873 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:45:57.237311   27873 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:45:57.237461   27873 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHPort
	I0528 20:45:57.237645   27873 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHKeyPath
	I0528 20:45:57.237816   27873 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHUsername
	I0528 20:45:57.237965   27873 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m03/id_rsa Username:docker}
	I0528 20:45:57.327598   27873 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 20:45:57.347453   27873 kubeconfig.go:125] found "ha-908878" server: "https://192.168.39.254:8443"
	I0528 20:45:57.347483   27873 api_server.go:166] Checking apiserver status ...
	I0528 20:45:57.347522   27873 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 20:45:57.365807   27873 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1538/cgroup
	W0528 20:45:57.375481   27873 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1538/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0528 20:45:57.375530   27873 ssh_runner.go:195] Run: ls
	I0528 20:45:57.380266   27873 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0528 20:45:57.384825   27873 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0528 20:45:57.384847   27873 status.go:422] ha-908878-m03 apiserver status = Running (err=<nil>)
	I0528 20:45:57.384857   27873 status.go:257] ha-908878-m03 status: &{Name:ha-908878-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0528 20:45:57.384875   27873 status.go:255] checking status of ha-908878-m04 ...
	I0528 20:45:57.385304   27873 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:57.385347   27873 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:57.400935   27873 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41913
	I0528 20:45:57.401362   27873 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:57.401926   27873 main.go:141] libmachine: Using API Version  1
	I0528 20:45:57.401951   27873 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:57.402254   27873 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:57.402520   27873 main.go:141] libmachine: (ha-908878-m04) Calling .GetState
	I0528 20:45:57.404020   27873 status.go:330] ha-908878-m04 host status = "Running" (err=<nil>)
	I0528 20:45:57.404037   27873 host.go:66] Checking if "ha-908878-m04" exists ...
	I0528 20:45:57.404447   27873 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:57.404485   27873 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:57.418870   27873 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37293
	I0528 20:45:57.419222   27873 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:57.419700   27873 main.go:141] libmachine: Using API Version  1
	I0528 20:45:57.419723   27873 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:57.420032   27873 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:57.420249   27873 main.go:141] libmachine: (ha-908878-m04) Calling .GetIP
	I0528 20:45:57.423039   27873 main.go:141] libmachine: (ha-908878-m04) DBG | domain ha-908878-m04 has defined MAC address 52:54:00:dd:1f:24 in network mk-ha-908878
	I0528 20:45:57.423385   27873 main.go:141] libmachine: (ha-908878-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:1f:24", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:42:12 +0000 UTC Type:0 Mac:52:54:00:dd:1f:24 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-908878-m04 Clientid:01:52:54:00:dd:1f:24}
	I0528 20:45:57.423411   27873 main.go:141] libmachine: (ha-908878-m04) DBG | domain ha-908878-m04 has defined IP address 192.168.39.38 and MAC address 52:54:00:dd:1f:24 in network mk-ha-908878
	I0528 20:45:57.423547   27873 host.go:66] Checking if "ha-908878-m04" exists ...
	I0528 20:45:57.423945   27873 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:45:57.423985   27873 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:45:57.438414   27873 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46541
	I0528 20:45:57.438781   27873 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:45:57.439221   27873 main.go:141] libmachine: Using API Version  1
	I0528 20:45:57.439243   27873 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:45:57.439592   27873 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:45:57.439786   27873 main.go:141] libmachine: (ha-908878-m04) Calling .DriverName
	I0528 20:45:57.439954   27873 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 20:45:57.439970   27873 main.go:141] libmachine: (ha-908878-m04) Calling .GetSSHHostname
	I0528 20:45:57.442598   27873 main.go:141] libmachine: (ha-908878-m04) DBG | domain ha-908878-m04 has defined MAC address 52:54:00:dd:1f:24 in network mk-ha-908878
	I0528 20:45:57.442959   27873 main.go:141] libmachine: (ha-908878-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:1f:24", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:42:12 +0000 UTC Type:0 Mac:52:54:00:dd:1f:24 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-908878-m04 Clientid:01:52:54:00:dd:1f:24}
	I0528 20:45:57.442983   27873 main.go:141] libmachine: (ha-908878-m04) DBG | domain ha-908878-m04 has defined IP address 192.168.39.38 and MAC address 52:54:00:dd:1f:24 in network mk-ha-908878
	I0528 20:45:57.443112   27873 main.go:141] libmachine: (ha-908878-m04) Calling .GetSSHPort
	I0528 20:45:57.443277   27873 main.go:141] libmachine: (ha-908878-m04) Calling .GetSSHKeyPath
	I0528 20:45:57.443440   27873 main.go:141] libmachine: (ha-908878-m04) Calling .GetSSHUsername
	I0528 20:45:57.443604   27873 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m04/id_rsa Username:docker}
	I0528 20:45:57.529820   27873 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 20:45:57.545849   27873 status.go:257] ha-908878-m04 status: &{Name:ha-908878-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
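The per-node apiserver probe in the trace above has three steps: locate the kube-apiserver PID with pgrep, try to read that PID's freezer cgroup (which exits with status 1 here, most likely because the node runs cgroup v2, where no freezer controller line appears in /proc/<pid>/cgroup), and then fall back to probing the cluster VIP's /healthz endpoint. Below is a minimal shell replay of the same checks, to be run on a control-plane node over SSH; it is only an illustrative sketch, and the curl call stands in for the Go HTTP client that minikube actually uses.

	# hypothetical manual replay of the status probe logged above
	PID=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')          # find the apiserver process
	sudo egrep '^[0-9]+:freezer:' /proc/${PID}/cgroup \
	  || echo "no freezer controller (expected on cgroup v2)"    # same non-fatal failure as in the log
	curl -sk https://192.168.39.254:8443/healthz                 # VIP healthz; expected output: ok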
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-908878 status -v=7 --alsologtostderr: exit status 7 (630.56843ms)

                                                
                                                
-- stdout --
	ha-908878
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-908878-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-908878-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-908878-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0528 20:46:01.823979   28011 out.go:291] Setting OutFile to fd 1 ...
	I0528 20:46:01.824642   28011 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:46:01.824660   28011 out.go:304] Setting ErrFile to fd 2...
	I0528 20:46:01.824667   28011 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:46:01.825106   28011 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
	I0528 20:46:01.825383   28011 out.go:298] Setting JSON to false
	I0528 20:46:01.825438   28011 mustload.go:65] Loading cluster: ha-908878
	I0528 20:46:01.825554   28011 notify.go:220] Checking for updates...
	I0528 20:46:01.826383   28011 config.go:182] Loaded profile config "ha-908878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 20:46:01.826405   28011 status.go:255] checking status of ha-908878 ...
	I0528 20:46:01.826864   28011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:46:01.826913   28011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:46:01.841902   28011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39697
	I0528 20:46:01.842400   28011 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:46:01.842985   28011 main.go:141] libmachine: Using API Version  1
	I0528 20:46:01.843027   28011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:46:01.843336   28011 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:46:01.843498   28011 main.go:141] libmachine: (ha-908878) Calling .GetState
	I0528 20:46:01.845411   28011 status.go:330] ha-908878 host status = "Running" (err=<nil>)
	I0528 20:46:01.845426   28011 host.go:66] Checking if "ha-908878" exists ...
	I0528 20:46:01.845754   28011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:46:01.845817   28011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:46:01.860178   28011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35181
	I0528 20:46:01.860623   28011 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:46:01.861003   28011 main.go:141] libmachine: Using API Version  1
	I0528 20:46:01.861021   28011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:46:01.861319   28011 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:46:01.861514   28011 main.go:141] libmachine: (ha-908878) Calling .GetIP
	I0528 20:46:01.864588   28011 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:46:01.865041   28011 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:46:01.865066   28011 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:46:01.865196   28011 host.go:66] Checking if "ha-908878" exists ...
	I0528 20:46:01.865562   28011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:46:01.865626   28011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:46:01.882354   28011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35591
	I0528 20:46:01.882761   28011 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:46:01.883226   28011 main.go:141] libmachine: Using API Version  1
	I0528 20:46:01.883242   28011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:46:01.883516   28011 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:46:01.883693   28011 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:46:01.883862   28011 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 20:46:01.883881   28011 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:46:01.886627   28011 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:46:01.886984   28011 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:46:01.887011   28011 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:46:01.887152   28011 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:46:01.887318   28011 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:46:01.887458   28011 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:46:01.887574   28011 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/id_rsa Username:docker}
	I0528 20:46:01.974431   28011 ssh_runner.go:195] Run: systemctl --version
	I0528 20:46:01.981661   28011 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 20:46:01.998635   28011 kubeconfig.go:125] found "ha-908878" server: "https://192.168.39.254:8443"
	I0528 20:46:01.998662   28011 api_server.go:166] Checking apiserver status ...
	I0528 20:46:01.998689   28011 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 20:46:02.014197   28011 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1158/cgroup
	W0528 20:46:02.024291   28011 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1158/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0528 20:46:02.024338   28011 ssh_runner.go:195] Run: ls
	I0528 20:46:02.038083   28011 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0528 20:46:02.042692   28011 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0528 20:46:02.042710   28011 status.go:422] ha-908878 apiserver status = Running (err=<nil>)
	I0528 20:46:02.042719   28011 status.go:257] ha-908878 status: &{Name:ha-908878 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0528 20:46:02.042738   28011 status.go:255] checking status of ha-908878-m02 ...
	I0528 20:46:02.043055   28011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:46:02.043088   28011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:46:02.057990   28011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35643
	I0528 20:46:02.058312   28011 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:46:02.058719   28011 main.go:141] libmachine: Using API Version  1
	I0528 20:46:02.058739   28011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:46:02.059040   28011 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:46:02.059204   28011 main.go:141] libmachine: (ha-908878-m02) Calling .GetState
	I0528 20:46:02.060732   28011 status.go:330] ha-908878-m02 host status = "Stopped" (err=<nil>)
	I0528 20:46:02.060746   28011 status.go:343] host is not running, skipping remaining checks
	I0528 20:46:02.060752   28011 status.go:257] ha-908878-m02 status: &{Name:ha-908878-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0528 20:46:02.060770   28011 status.go:255] checking status of ha-908878-m03 ...
	I0528 20:46:02.061049   28011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:46:02.061079   28011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:46:02.075074   28011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35287
	I0528 20:46:02.075472   28011 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:46:02.075964   28011 main.go:141] libmachine: Using API Version  1
	I0528 20:46:02.075996   28011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:46:02.076272   28011 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:46:02.076433   28011 main.go:141] libmachine: (ha-908878-m03) Calling .GetState
	I0528 20:46:02.077958   28011 status.go:330] ha-908878-m03 host status = "Running" (err=<nil>)
	I0528 20:46:02.077972   28011 host.go:66] Checking if "ha-908878-m03" exists ...
	I0528 20:46:02.078237   28011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:46:02.078273   28011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:46:02.091976   28011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40661
	I0528 20:46:02.092347   28011 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:46:02.092795   28011 main.go:141] libmachine: Using API Version  1
	I0528 20:46:02.092817   28011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:46:02.093131   28011 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:46:02.093332   28011 main.go:141] libmachine: (ha-908878-m03) Calling .GetIP
	I0528 20:46:02.096167   28011 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:46:02.096585   28011 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:46:02.096613   28011 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:46:02.096704   28011 host.go:66] Checking if "ha-908878-m03" exists ...
	I0528 20:46:02.096988   28011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:46:02.097027   28011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:46:02.113661   28011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35695
	I0528 20:46:02.114028   28011 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:46:02.114458   28011 main.go:141] libmachine: Using API Version  1
	I0528 20:46:02.114485   28011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:46:02.114826   28011 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:46:02.114996   28011 main.go:141] libmachine: (ha-908878-m03) Calling .DriverName
	I0528 20:46:02.115177   28011 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 20:46:02.115194   28011 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHHostname
	I0528 20:46:02.117820   28011 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:46:02.118272   28011 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:46:02.118294   28011 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:46:02.118451   28011 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHPort
	I0528 20:46:02.118605   28011 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHKeyPath
	I0528 20:46:02.118739   28011 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHUsername
	I0528 20:46:02.118839   28011 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m03/id_rsa Username:docker}
	I0528 20:46:02.205893   28011 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 20:46:02.223061   28011 kubeconfig.go:125] found "ha-908878" server: "https://192.168.39.254:8443"
	I0528 20:46:02.223085   28011 api_server.go:166] Checking apiserver status ...
	I0528 20:46:02.223113   28011 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 20:46:02.237404   28011 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1538/cgroup
	W0528 20:46:02.246714   28011 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1538/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0528 20:46:02.246758   28011 ssh_runner.go:195] Run: ls
	I0528 20:46:02.250977   28011 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0528 20:46:02.255416   28011 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0528 20:46:02.255465   28011 status.go:422] ha-908878-m03 apiserver status = Running (err=<nil>)
	I0528 20:46:02.255475   28011 status.go:257] ha-908878-m03 status: &{Name:ha-908878-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0528 20:46:02.255505   28011 status.go:255] checking status of ha-908878-m04 ...
	I0528 20:46:02.255782   28011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:46:02.255819   28011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:46:02.270469   28011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41085
	I0528 20:46:02.270809   28011 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:46:02.271263   28011 main.go:141] libmachine: Using API Version  1
	I0528 20:46:02.271285   28011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:46:02.271565   28011 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:46:02.271767   28011 main.go:141] libmachine: (ha-908878-m04) Calling .GetState
	I0528 20:46:02.273196   28011 status.go:330] ha-908878-m04 host status = "Running" (err=<nil>)
	I0528 20:46:02.273211   28011 host.go:66] Checking if "ha-908878-m04" exists ...
	I0528 20:46:02.273477   28011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:46:02.273517   28011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:46:02.287674   28011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43801
	I0528 20:46:02.288036   28011 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:46:02.288401   28011 main.go:141] libmachine: Using API Version  1
	I0528 20:46:02.288422   28011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:46:02.288711   28011 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:46:02.288879   28011 main.go:141] libmachine: (ha-908878-m04) Calling .GetIP
	I0528 20:46:02.291215   28011 main.go:141] libmachine: (ha-908878-m04) DBG | domain ha-908878-m04 has defined MAC address 52:54:00:dd:1f:24 in network mk-ha-908878
	I0528 20:46:02.291569   28011 main.go:141] libmachine: (ha-908878-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:1f:24", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:42:12 +0000 UTC Type:0 Mac:52:54:00:dd:1f:24 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-908878-m04 Clientid:01:52:54:00:dd:1f:24}
	I0528 20:46:02.291589   28011 main.go:141] libmachine: (ha-908878-m04) DBG | domain ha-908878-m04 has defined IP address 192.168.39.38 and MAC address 52:54:00:dd:1f:24 in network mk-ha-908878
	I0528 20:46:02.291790   28011 host.go:66] Checking if "ha-908878-m04" exists ...
	I0528 20:46:02.292053   28011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:46:02.292083   28011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:46:02.306390   28011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38753
	I0528 20:46:02.306774   28011 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:46:02.307200   28011 main.go:141] libmachine: Using API Version  1
	I0528 20:46:02.307221   28011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:46:02.307501   28011 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:46:02.307655   28011 main.go:141] libmachine: (ha-908878-m04) Calling .DriverName
	I0528 20:46:02.307806   28011 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 20:46:02.307825   28011 main.go:141] libmachine: (ha-908878-m04) Calling .GetSSHHostname
	I0528 20:46:02.310472   28011 main.go:141] libmachine: (ha-908878-m04) DBG | domain ha-908878-m04 has defined MAC address 52:54:00:dd:1f:24 in network mk-ha-908878
	I0528 20:46:02.310885   28011 main.go:141] libmachine: (ha-908878-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:1f:24", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:42:12 +0000 UTC Type:0 Mac:52:54:00:dd:1f:24 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-908878-m04 Clientid:01:52:54:00:dd:1f:24}
	I0528 20:46:02.310905   28011 main.go:141] libmachine: (ha-908878-m04) DBG | domain ha-908878-m04 has defined IP address 192.168.39.38 and MAC address 52:54:00:dd:1f:24 in network mk-ha-908878
	I0528 20:46:02.311036   28011 main.go:141] libmachine: (ha-908878-m04) Calling .GetSSHPort
	I0528 20:46:02.311175   28011 main.go:141] libmachine: (ha-908878-m04) Calling .GetSSHKeyPath
	I0528 20:46:02.311283   28011 main.go:141] libmachine: (ha-908878-m04) Calling .GetSSHUsername
	I0528 20:46:02.311428   28011 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m04/id_rsa Username:docker}
	I0528 20:46:02.397751   28011 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 20:46:02.412862   28011 status.go:257] ha-908878-m04 status: &{Name:ha-908878-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-908878 status -v=7 --alsologtostderr: exit status 7 (616.583658ms)

                                                
                                                
-- stdout --
	ha-908878
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-908878-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-908878-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-908878-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0528 20:46:18.923789   28131 out.go:291] Setting OutFile to fd 1 ...
	I0528 20:46:18.924270   28131 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:46:18.924328   28131 out.go:304] Setting ErrFile to fd 2...
	I0528 20:46:18.924345   28131 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:46:18.924740   28131 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
	I0528 20:46:18.925627   28131 out.go:298] Setting JSON to false
	I0528 20:46:18.925665   28131 mustload.go:65] Loading cluster: ha-908878
	I0528 20:46:18.925754   28131 notify.go:220] Checking for updates...
	I0528 20:46:18.925993   28131 config.go:182] Loaded profile config "ha-908878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 20:46:18.926007   28131 status.go:255] checking status of ha-908878 ...
	I0528 20:46:18.926369   28131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:46:18.926427   28131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:46:18.945545   28131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34455
	I0528 20:46:18.946018   28131 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:46:18.946705   28131 main.go:141] libmachine: Using API Version  1
	I0528 20:46:18.946747   28131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:46:18.947120   28131 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:46:18.947339   28131 main.go:141] libmachine: (ha-908878) Calling .GetState
	I0528 20:46:18.949014   28131 status.go:330] ha-908878 host status = "Running" (err=<nil>)
	I0528 20:46:18.949031   28131 host.go:66] Checking if "ha-908878" exists ...
	I0528 20:46:18.949438   28131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:46:18.949479   28131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:46:18.964571   28131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39929
	I0528 20:46:18.964993   28131 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:46:18.965431   28131 main.go:141] libmachine: Using API Version  1
	I0528 20:46:18.965454   28131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:46:18.965750   28131 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:46:18.965944   28131 main.go:141] libmachine: (ha-908878) Calling .GetIP
	I0528 20:46:18.968723   28131 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:46:18.969109   28131 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:46:18.969143   28131 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:46:18.969227   28131 host.go:66] Checking if "ha-908878" exists ...
	I0528 20:46:18.969637   28131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:46:18.969681   28131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:46:18.984024   28131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36805
	I0528 20:46:18.984398   28131 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:46:18.984802   28131 main.go:141] libmachine: Using API Version  1
	I0528 20:46:18.984819   28131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:46:18.985127   28131 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:46:18.985306   28131 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:46:18.985494   28131 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 20:46:18.985514   28131 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:46:18.988411   28131 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:46:18.988772   28131 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:46:18.988807   28131 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:46:18.988902   28131 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:46:18.989040   28131 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:46:18.989193   28131 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:46:18.989374   28131 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/id_rsa Username:docker}
	I0528 20:46:19.073678   28131 ssh_runner.go:195] Run: systemctl --version
	I0528 20:46:19.080105   28131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 20:46:19.094175   28131 kubeconfig.go:125] found "ha-908878" server: "https://192.168.39.254:8443"
	I0528 20:46:19.094201   28131 api_server.go:166] Checking apiserver status ...
	I0528 20:46:19.094228   28131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 20:46:19.108235   28131 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1158/cgroup
	W0528 20:46:19.117830   28131 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1158/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0528 20:46:19.117883   28131 ssh_runner.go:195] Run: ls
	I0528 20:46:19.122182   28131 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0528 20:46:19.128054   28131 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0528 20:46:19.128083   28131 status.go:422] ha-908878 apiserver status = Running (err=<nil>)
	I0528 20:46:19.128110   28131 status.go:257] ha-908878 status: &{Name:ha-908878 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0528 20:46:19.128134   28131 status.go:255] checking status of ha-908878-m02 ...
	I0528 20:46:19.128560   28131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:46:19.128629   28131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:46:19.143324   28131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44407
	I0528 20:46:19.143779   28131 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:46:19.144229   28131 main.go:141] libmachine: Using API Version  1
	I0528 20:46:19.144254   28131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:46:19.144550   28131 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:46:19.144729   28131 main.go:141] libmachine: (ha-908878-m02) Calling .GetState
	I0528 20:46:19.146053   28131 status.go:330] ha-908878-m02 host status = "Stopped" (err=<nil>)
	I0528 20:46:19.146065   28131 status.go:343] host is not running, skipping remaining checks
	I0528 20:46:19.146070   28131 status.go:257] ha-908878-m02 status: &{Name:ha-908878-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0528 20:46:19.146085   28131 status.go:255] checking status of ha-908878-m03 ...
	I0528 20:46:19.146351   28131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:46:19.146380   28131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:46:19.160782   28131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39299
	I0528 20:46:19.161217   28131 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:46:19.161696   28131 main.go:141] libmachine: Using API Version  1
	I0528 20:46:19.161718   28131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:46:19.162042   28131 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:46:19.162226   28131 main.go:141] libmachine: (ha-908878-m03) Calling .GetState
	I0528 20:46:19.163671   28131 status.go:330] ha-908878-m03 host status = "Running" (err=<nil>)
	I0528 20:46:19.163687   28131 host.go:66] Checking if "ha-908878-m03" exists ...
	I0528 20:46:19.163969   28131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:46:19.164000   28131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:46:19.178454   28131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44269
	I0528 20:46:19.178870   28131 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:46:19.179330   28131 main.go:141] libmachine: Using API Version  1
	I0528 20:46:19.179353   28131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:46:19.179665   28131 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:46:19.179831   28131 main.go:141] libmachine: (ha-908878-m03) Calling .GetIP
	I0528 20:46:19.182891   28131 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:46:19.183285   28131 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:46:19.183308   28131 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:46:19.183443   28131 host.go:66] Checking if "ha-908878-m03" exists ...
	I0528 20:46:19.183782   28131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:46:19.183823   28131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:46:19.199864   28131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43345
	I0528 20:46:19.200262   28131 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:46:19.200685   28131 main.go:141] libmachine: Using API Version  1
	I0528 20:46:19.200706   28131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:46:19.200985   28131 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:46:19.201181   28131 main.go:141] libmachine: (ha-908878-m03) Calling .DriverName
	I0528 20:46:19.201361   28131 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 20:46:19.201381   28131 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHHostname
	I0528 20:46:19.203848   28131 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:46:19.204339   28131 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:46:19.204364   28131 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:46:19.204451   28131 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHPort
	I0528 20:46:19.204625   28131 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHKeyPath
	I0528 20:46:19.204790   28131 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHUsername
	I0528 20:46:19.204951   28131 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m03/id_rsa Username:docker}
	I0528 20:46:19.289744   28131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 20:46:19.307096   28131 kubeconfig.go:125] found "ha-908878" server: "https://192.168.39.254:8443"
	I0528 20:46:19.307120   28131 api_server.go:166] Checking apiserver status ...
	I0528 20:46:19.307150   28131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 20:46:19.322599   28131 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1538/cgroup
	W0528 20:46:19.332587   28131 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1538/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0528 20:46:19.332658   28131 ssh_runner.go:195] Run: ls
	I0528 20:46:19.337391   28131 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0528 20:46:19.341539   28131 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0528 20:46:19.341557   28131 status.go:422] ha-908878-m03 apiserver status = Running (err=<nil>)
	I0528 20:46:19.341565   28131 status.go:257] ha-908878-m03 status: &{Name:ha-908878-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0528 20:46:19.341580   28131 status.go:255] checking status of ha-908878-m04 ...
	I0528 20:46:19.342004   28131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:46:19.342049   28131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:46:19.356650   28131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38465
	I0528 20:46:19.357126   28131 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:46:19.357596   28131 main.go:141] libmachine: Using API Version  1
	I0528 20:46:19.357619   28131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:46:19.357925   28131 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:46:19.358118   28131 main.go:141] libmachine: (ha-908878-m04) Calling .GetState
	I0528 20:46:19.359744   28131 status.go:330] ha-908878-m04 host status = "Running" (err=<nil>)
	I0528 20:46:19.359760   28131 host.go:66] Checking if "ha-908878-m04" exists ...
	I0528 20:46:19.360140   28131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:46:19.360180   28131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:46:19.374458   28131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43929
	I0528 20:46:19.374846   28131 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:46:19.375342   28131 main.go:141] libmachine: Using API Version  1
	I0528 20:46:19.375361   28131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:46:19.375642   28131 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:46:19.375798   28131 main.go:141] libmachine: (ha-908878-m04) Calling .GetIP
	I0528 20:46:19.378506   28131 main.go:141] libmachine: (ha-908878-m04) DBG | domain ha-908878-m04 has defined MAC address 52:54:00:dd:1f:24 in network mk-ha-908878
	I0528 20:46:19.378917   28131 main.go:141] libmachine: (ha-908878-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:1f:24", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:42:12 +0000 UTC Type:0 Mac:52:54:00:dd:1f:24 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-908878-m04 Clientid:01:52:54:00:dd:1f:24}
	I0528 20:46:19.378956   28131 main.go:141] libmachine: (ha-908878-m04) DBG | domain ha-908878-m04 has defined IP address 192.168.39.38 and MAC address 52:54:00:dd:1f:24 in network mk-ha-908878
	I0528 20:46:19.379064   28131 host.go:66] Checking if "ha-908878-m04" exists ...
	I0528 20:46:19.379336   28131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:46:19.379375   28131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:46:19.394361   28131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40183
	I0528 20:46:19.394719   28131 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:46:19.395250   28131 main.go:141] libmachine: Using API Version  1
	I0528 20:46:19.395272   28131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:46:19.395583   28131 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:46:19.395773   28131 main.go:141] libmachine: (ha-908878-m04) Calling .DriverName
	I0528 20:46:19.395919   28131 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 20:46:19.395940   28131 main.go:141] libmachine: (ha-908878-m04) Calling .GetSSHHostname
	I0528 20:46:19.398770   28131 main.go:141] libmachine: (ha-908878-m04) DBG | domain ha-908878-m04 has defined MAC address 52:54:00:dd:1f:24 in network mk-ha-908878
	I0528 20:46:19.399152   28131 main.go:141] libmachine: (ha-908878-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:1f:24", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:42:12 +0000 UTC Type:0 Mac:52:54:00:dd:1f:24 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-908878-m04 Clientid:01:52:54:00:dd:1f:24}
	I0528 20:46:19.399178   28131 main.go:141] libmachine: (ha-908878-m04) DBG | domain ha-908878-m04 has defined IP address 192.168.39.38 and MAC address 52:54:00:dd:1f:24 in network mk-ha-908878
	I0528 20:46:19.399329   28131 main.go:141] libmachine: (ha-908878-m04) Calling .GetSSHPort
	I0528 20:46:19.399508   28131 main.go:141] libmachine: (ha-908878-m04) Calling .GetSSHKeyPath
	I0528 20:46:19.399672   28131 main.go:141] libmachine: (ha-908878-m04) Calling .GetSSHUsername
	I0528 20:46:19.399819   28131 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m04/id_rsa Username:docker}
	I0528 20:46:19.485373   28131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 20:46:19.499817   28131 status.go:257] ha-908878-m04 status: &{Name:ha-908878-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-908878 status -v=7 --alsologtostderr" : exit status 7
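The assertion at ha_test.go:432 fails on the exit code alone: ha-908878-m02 still reports Stopped after the preceding "node start m02", and minikube status exits non-zero (7 in this run) whenever a node's host, kubelet, or apiserver is not healthy. A hedged repro sketch, assuming the ha-908878 profile from this run still exists on the Jenkins agent:

	# hypothetical manual re-run of the failing check
	out/minikube-linux-amd64 -p ha-908878 status -v=7 --alsologtostderr
	echo "exit=$?"    # 7 while m02 stays Stopped; 0 once every node reports Running/Configured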
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-908878 -n ha-908878
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-908878 logs -n 25: (1.480044593s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-908878 ssh -n                                                                 | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-908878 cp ha-908878-m03:/home/docker/cp-test.txt                              | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878:/home/docker/cp-test_ha-908878-m03_ha-908878.txt                       |           |         |         |                     |                     |
	| ssh     | ha-908878 ssh -n                                                                 | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-908878 ssh -n ha-908878 sudo cat                                              | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | /home/docker/cp-test_ha-908878-m03_ha-908878.txt                                 |           |         |         |                     |                     |
	| cp      | ha-908878 cp ha-908878-m03:/home/docker/cp-test.txt                              | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878-m02:/home/docker/cp-test_ha-908878-m03_ha-908878-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-908878 ssh -n                                                                 | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-908878 ssh -n ha-908878-m02 sudo cat                                          | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | /home/docker/cp-test_ha-908878-m03_ha-908878-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-908878 cp ha-908878-m03:/home/docker/cp-test.txt                              | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878-m04:/home/docker/cp-test_ha-908878-m03_ha-908878-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-908878 ssh -n                                                                 | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-908878 ssh -n ha-908878-m04 sudo cat                                          | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | /home/docker/cp-test_ha-908878-m03_ha-908878-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-908878 cp testdata/cp-test.txt                                                | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-908878 ssh -n                                                                 | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-908878 cp ha-908878-m04:/home/docker/cp-test.txt                              | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3657915045/001/cp-test_ha-908878-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-908878 ssh -n                                                                 | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-908878 cp ha-908878-m04:/home/docker/cp-test.txt                              | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878:/home/docker/cp-test_ha-908878-m04_ha-908878.txt                       |           |         |         |                     |                     |
	| ssh     | ha-908878 ssh -n                                                                 | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-908878 ssh -n ha-908878 sudo cat                                              | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | /home/docker/cp-test_ha-908878-m04_ha-908878.txt                                 |           |         |         |                     |                     |
	| cp      | ha-908878 cp ha-908878-m04:/home/docker/cp-test.txt                              | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878-m02:/home/docker/cp-test_ha-908878-m04_ha-908878-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-908878 ssh -n                                                                 | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-908878 ssh -n ha-908878-m02 sudo cat                                          | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | /home/docker/cp-test_ha-908878-m04_ha-908878-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-908878 cp ha-908878-m04:/home/docker/cp-test.txt                              | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878-m03:/home/docker/cp-test_ha-908878-m04_ha-908878-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-908878 ssh -n                                                                 | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-908878 ssh -n ha-908878-m03 sudo cat                                          | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | /home/docker/cp-test_ha-908878-m04_ha-908878-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-908878 node stop m02 -v=7                                                     | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-908878 node start m02 -v=7                                                    | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:45 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/28 20:38:28
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0528 20:38:28.508057   22579 out.go:291] Setting OutFile to fd 1 ...
	I0528 20:38:28.508200   22579 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:38:28.508213   22579 out.go:304] Setting ErrFile to fd 2...
	I0528 20:38:28.508220   22579 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:38:28.508582   22579 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
	I0528 20:38:28.509131   22579 out.go:298] Setting JSON to false
	I0528 20:38:28.510023   22579 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1251,"bootTime":1716927457,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0528 20:38:28.510074   22579 start.go:139] virtualization: kvm guest
	I0528 20:38:28.512253   22579 out.go:177] * [ha-908878] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0528 20:38:28.513529   22579 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 20:38:28.514717   22579 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 20:38:28.513504   22579 notify.go:220] Checking for updates...
	I0528 20:38:28.517192   22579 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 20:38:28.518516   22579 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 20:38:28.519639   22579 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0528 20:38:28.520794   22579 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 20:38:28.521958   22579 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 20:38:28.555938   22579 out.go:177] * Using the kvm2 driver based on user configuration
	I0528 20:38:28.557171   22579 start.go:297] selected driver: kvm2
	I0528 20:38:28.557193   22579 start.go:901] validating driver "kvm2" against <nil>
	I0528 20:38:28.557210   22579 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0528 20:38:28.557907   22579 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 20:38:28.558002   22579 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18966-3963/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0528 20:38:28.573789   22579 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0528 20:38:28.573849   22579 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0528 20:38:28.574069   22579 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 20:38:28.574138   22579 cni.go:84] Creating CNI manager for ""
	I0528 20:38:28.574154   22579 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0528 20:38:28.574161   22579 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0528 20:38:28.574233   22579 start.go:340] cluster config:
	{Name:ha-908878 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-908878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 20:38:28.574344   22579 iso.go:125] acquiring lock: {Name:mk0ef48bd19f4019c3ee87391d15b9afc1d5e8d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 20:38:28.576870   22579 out.go:177] * Starting "ha-908878" primary control-plane node in "ha-908878" cluster
	I0528 20:38:28.578026   22579 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 20:38:28.578060   22579 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0528 20:38:28.578071   22579 cache.go:56] Caching tarball of preloaded images
	I0528 20:38:28.578129   22579 preload.go:173] Found /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0528 20:38:28.578140   22579 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0528 20:38:28.578409   22579 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/config.json ...
	I0528 20:38:28.578427   22579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/config.json: {Name:mk828cc9c3416b68ca79835683bb9902a90d34c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:38:28.578562   22579 start.go:360] acquireMachinesLock for ha-908878: {Name:mk6fb3103b370c7f3d9923fd9de7afe2c77239d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0528 20:38:28.578588   22579 start.go:364] duration metric: took 14.265µs to acquireMachinesLock for "ha-908878"
	I0528 20:38:28.578604   22579 start.go:93] Provisioning new machine with config: &{Name:ha-908878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-908878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0528 20:38:28.578659   22579 start.go:125] createHost starting for "" (driver="kvm2")
	I0528 20:38:28.580191   22579 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0528 20:38:28.580315   22579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:38:28.580355   22579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:38:28.594111   22579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35167
	I0528 20:38:28.594491   22579 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:38:28.595027   22579 main.go:141] libmachine: Using API Version  1
	I0528 20:38:28.595052   22579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:38:28.595338   22579 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:38:28.595499   22579 main.go:141] libmachine: (ha-908878) Calling .GetMachineName
	I0528 20:38:28.595664   22579 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:38:28.595774   22579 start.go:159] libmachine.API.Create for "ha-908878" (driver="kvm2")
	I0528 20:38:28.595809   22579 client.go:168] LocalClient.Create starting
	I0528 20:38:28.595848   22579 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem
	I0528 20:38:28.595882   22579 main.go:141] libmachine: Decoding PEM data...
	I0528 20:38:28.595899   22579 main.go:141] libmachine: Parsing certificate...
	I0528 20:38:28.595957   22579 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem
	I0528 20:38:28.595982   22579 main.go:141] libmachine: Decoding PEM data...
	I0528 20:38:28.595996   22579 main.go:141] libmachine: Parsing certificate...
	I0528 20:38:28.596012   22579 main.go:141] libmachine: Running pre-create checks...
	I0528 20:38:28.596021   22579 main.go:141] libmachine: (ha-908878) Calling .PreCreateCheck
	I0528 20:38:28.596395   22579 main.go:141] libmachine: (ha-908878) Calling .GetConfigRaw
	I0528 20:38:28.596722   22579 main.go:141] libmachine: Creating machine...
	I0528 20:38:28.596740   22579 main.go:141] libmachine: (ha-908878) Calling .Create
	I0528 20:38:28.596844   22579 main.go:141] libmachine: (ha-908878) Creating KVM machine...
	I0528 20:38:28.597973   22579 main.go:141] libmachine: (ha-908878) DBG | found existing default KVM network
	I0528 20:38:28.598602   22579 main.go:141] libmachine: (ha-908878) DBG | I0528 20:38:28.598479   22602 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0528 20:38:28.598613   22579 main.go:141] libmachine: (ha-908878) DBG | created network xml: 
	I0528 20:38:28.598627   22579 main.go:141] libmachine: (ha-908878) DBG | <network>
	I0528 20:38:28.598634   22579 main.go:141] libmachine: (ha-908878) DBG |   <name>mk-ha-908878</name>
	I0528 20:38:28.598640   22579 main.go:141] libmachine: (ha-908878) DBG |   <dns enable='no'/>
	I0528 20:38:28.598646   22579 main.go:141] libmachine: (ha-908878) DBG |   
	I0528 20:38:28.598655   22579 main.go:141] libmachine: (ha-908878) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0528 20:38:28.598662   22579 main.go:141] libmachine: (ha-908878) DBG |     <dhcp>
	I0528 20:38:28.598683   22579 main.go:141] libmachine: (ha-908878) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0528 20:38:28.598691   22579 main.go:141] libmachine: (ha-908878) DBG |     </dhcp>
	I0528 20:38:28.598718   22579 main.go:141] libmachine: (ha-908878) DBG |   </ip>
	I0528 20:38:28.598735   22579 main.go:141] libmachine: (ha-908878) DBG |   
	I0528 20:38:28.598744   22579 main.go:141] libmachine: (ha-908878) DBG | </network>
	I0528 20:38:28.598751   22579 main.go:141] libmachine: (ha-908878) DBG | 
	I0528 20:38:28.603635   22579 main.go:141] libmachine: (ha-908878) DBG | trying to create private KVM network mk-ha-908878 192.168.39.0/24...
	I0528 20:38:28.665930   22579 main.go:141] libmachine: (ha-908878) DBG | private KVM network mk-ha-908878 192.168.39.0/24 created
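
The DBG lines above show the kvm2 driver handing libvirt a small <network> document: a named network with DNS disabled and a DHCP range over 192.168.39.0/24. The following is only a rough Go sketch of generating such an XML document with text/template, using the values from the log; the struct and template names are illustrative and are not minikube's actual code.

package main

import (
	"os"
	"text/template"
)

// networkTmpl mirrors the <network> XML seen in the log: a named libvirt
// network with DNS disabled and a DHCP range covering most of the /24.
var networkTmpl = template.Must(template.New("net").Parse(`<network>
  <name>{{.Name}}</name>
  <dns enable='no'/>
  <ip address='{{.Gateway}}' netmask='{{.Netmask}}'>
    <dhcp>
      <range start='{{.DHCPStart}}' end='{{.DHCPEnd}}'/>
    </dhcp>
  </ip>
</network>
`))

type netParams struct {
	Name, Gateway, Netmask, DHCPStart, DHCPEnd string
}

func main() {
	// Values taken from the logged mk-ha-908878 network definition.
	p := netParams{
		Name:      "mk-ha-908878",
		Gateway:   "192.168.39.1",
		Netmask:   "255.255.255.0",
		DHCPStart: "192.168.39.2",
		DHCPEnd:   "192.168.39.253",
	}
	// Print to stdout here; the driver instead passes the XML to libvirt.
	_ = networkTmpl.Execute(os.Stdout, p)
}
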
	I0528 20:38:28.665964   22579 main.go:141] libmachine: (ha-908878) Setting up store path in /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878 ...
	I0528 20:38:28.665977   22579 main.go:141] libmachine: (ha-908878) DBG | I0528 20:38:28.665899   22602 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 20:38:28.665995   22579 main.go:141] libmachine: (ha-908878) Building disk image from file:///home/jenkins/minikube-integration/18966-3963/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso
	I0528 20:38:28.666062   22579 main.go:141] libmachine: (ha-908878) Downloading /home/jenkins/minikube-integration/18966-3963/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18966-3963/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0528 20:38:28.894340   22579 main.go:141] libmachine: (ha-908878) DBG | I0528 20:38:28.894229   22602 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/id_rsa...
	I0528 20:38:28.954571   22579 main.go:141] libmachine: (ha-908878) DBG | I0528 20:38:28.954484   22602 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/ha-908878.rawdisk...
	I0528 20:38:28.954612   22579 main.go:141] libmachine: (ha-908878) DBG | Writing magic tar header
	I0528 20:38:28.954624   22579 main.go:141] libmachine: (ha-908878) DBG | Writing SSH key tar header
	I0528 20:38:28.954648   22579 main.go:141] libmachine: (ha-908878) DBG | I0528 20:38:28.954607   22602 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878 ...
	I0528 20:38:28.954758   22579 main.go:141] libmachine: (ha-908878) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878
	I0528 20:38:28.954782   22579 main.go:141] libmachine: (ha-908878) Setting executable bit set on /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878 (perms=drwx------)
	I0528 20:38:28.954790   22579 main.go:141] libmachine: (ha-908878) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18966-3963/.minikube/machines
	I0528 20:38:28.954805   22579 main.go:141] libmachine: (ha-908878) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 20:38:28.954816   22579 main.go:141] libmachine: (ha-908878) Setting executable bit set on /home/jenkins/minikube-integration/18966-3963/.minikube/machines (perms=drwxr-xr-x)
	I0528 20:38:28.954829   22579 main.go:141] libmachine: (ha-908878) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18966-3963
	I0528 20:38:28.954841   22579 main.go:141] libmachine: (ha-908878) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0528 20:38:28.954849   22579 main.go:141] libmachine: (ha-908878) DBG | Checking permissions on dir: /home/jenkins
	I0528 20:38:28.954855   22579 main.go:141] libmachine: (ha-908878) DBG | Checking permissions on dir: /home
	I0528 20:38:28.954860   22579 main.go:141] libmachine: (ha-908878) DBG | Skipping /home - not owner
	I0528 20:38:28.954872   22579 main.go:141] libmachine: (ha-908878) Setting executable bit set on /home/jenkins/minikube-integration/18966-3963/.minikube (perms=drwxr-xr-x)
	I0528 20:38:28.954885   22579 main.go:141] libmachine: (ha-908878) Setting executable bit set on /home/jenkins/minikube-integration/18966-3963 (perms=drwxrwxr-x)
	I0528 20:38:28.954902   22579 main.go:141] libmachine: (ha-908878) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0528 20:38:28.954915   22579 main.go:141] libmachine: (ha-908878) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0528 20:38:28.954922   22579 main.go:141] libmachine: (ha-908878) Creating domain...
	I0528 20:38:28.956073   22579 main.go:141] libmachine: (ha-908878) define libvirt domain using xml: 
	I0528 20:38:28.956095   22579 main.go:141] libmachine: (ha-908878) <domain type='kvm'>
	I0528 20:38:28.956100   22579 main.go:141] libmachine: (ha-908878)   <name>ha-908878</name>
	I0528 20:38:28.956105   22579 main.go:141] libmachine: (ha-908878)   <memory unit='MiB'>2200</memory>
	I0528 20:38:28.956110   22579 main.go:141] libmachine: (ha-908878)   <vcpu>2</vcpu>
	I0528 20:38:28.956117   22579 main.go:141] libmachine: (ha-908878)   <features>
	I0528 20:38:28.956122   22579 main.go:141] libmachine: (ha-908878)     <acpi/>
	I0528 20:38:28.956126   22579 main.go:141] libmachine: (ha-908878)     <apic/>
	I0528 20:38:28.956131   22579 main.go:141] libmachine: (ha-908878)     <pae/>
	I0528 20:38:28.956148   22579 main.go:141] libmachine: (ha-908878)     
	I0528 20:38:28.956161   22579 main.go:141] libmachine: (ha-908878)   </features>
	I0528 20:38:28.956168   22579 main.go:141] libmachine: (ha-908878)   <cpu mode='host-passthrough'>
	I0528 20:38:28.956178   22579 main.go:141] libmachine: (ha-908878)   
	I0528 20:38:28.956185   22579 main.go:141] libmachine: (ha-908878)   </cpu>
	I0528 20:38:28.956192   22579 main.go:141] libmachine: (ha-908878)   <os>
	I0528 20:38:28.956202   22579 main.go:141] libmachine: (ha-908878)     <type>hvm</type>
	I0528 20:38:28.956208   22579 main.go:141] libmachine: (ha-908878)     <boot dev='cdrom'/>
	I0528 20:38:28.956212   22579 main.go:141] libmachine: (ha-908878)     <boot dev='hd'/>
	I0528 20:38:28.956218   22579 main.go:141] libmachine: (ha-908878)     <bootmenu enable='no'/>
	I0528 20:38:28.956228   22579 main.go:141] libmachine: (ha-908878)   </os>
	I0528 20:38:28.956240   22579 main.go:141] libmachine: (ha-908878)   <devices>
	I0528 20:38:28.956257   22579 main.go:141] libmachine: (ha-908878)     <disk type='file' device='cdrom'>
	I0528 20:38:28.956272   22579 main.go:141] libmachine: (ha-908878)       <source file='/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/boot2docker.iso'/>
	I0528 20:38:28.956283   22579 main.go:141] libmachine: (ha-908878)       <target dev='hdc' bus='scsi'/>
	I0528 20:38:28.956294   22579 main.go:141] libmachine: (ha-908878)       <readonly/>
	I0528 20:38:28.956301   22579 main.go:141] libmachine: (ha-908878)     </disk>
	I0528 20:38:28.956329   22579 main.go:141] libmachine: (ha-908878)     <disk type='file' device='disk'>
	I0528 20:38:28.956355   22579 main.go:141] libmachine: (ha-908878)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0528 20:38:28.956372   22579 main.go:141] libmachine: (ha-908878)       <source file='/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/ha-908878.rawdisk'/>
	I0528 20:38:28.956383   22579 main.go:141] libmachine: (ha-908878)       <target dev='hda' bus='virtio'/>
	I0528 20:38:28.956396   22579 main.go:141] libmachine: (ha-908878)     </disk>
	I0528 20:38:28.956407   22579 main.go:141] libmachine: (ha-908878)     <interface type='network'>
	I0528 20:38:28.956427   22579 main.go:141] libmachine: (ha-908878)       <source network='mk-ha-908878'/>
	I0528 20:38:28.956443   22579 main.go:141] libmachine: (ha-908878)       <model type='virtio'/>
	I0528 20:38:28.956459   22579 main.go:141] libmachine: (ha-908878)     </interface>
	I0528 20:38:28.956475   22579 main.go:141] libmachine: (ha-908878)     <interface type='network'>
	I0528 20:38:28.956488   22579 main.go:141] libmachine: (ha-908878)       <source network='default'/>
	I0528 20:38:28.956499   22579 main.go:141] libmachine: (ha-908878)       <model type='virtio'/>
	I0528 20:38:28.956509   22579 main.go:141] libmachine: (ha-908878)     </interface>
	I0528 20:38:28.956516   22579 main.go:141] libmachine: (ha-908878)     <serial type='pty'>
	I0528 20:38:28.956527   22579 main.go:141] libmachine: (ha-908878)       <target port='0'/>
	I0528 20:38:28.956536   22579 main.go:141] libmachine: (ha-908878)     </serial>
	I0528 20:38:28.956555   22579 main.go:141] libmachine: (ha-908878)     <console type='pty'>
	I0528 20:38:28.956567   22579 main.go:141] libmachine: (ha-908878)       <target type='serial' port='0'/>
	I0528 20:38:28.956602   22579 main.go:141] libmachine: (ha-908878)     </console>
	I0528 20:38:28.956627   22579 main.go:141] libmachine: (ha-908878)     <rng model='virtio'>
	I0528 20:38:28.956637   22579 main.go:141] libmachine: (ha-908878)       <backend model='random'>/dev/random</backend>
	I0528 20:38:28.956695   22579 main.go:141] libmachine: (ha-908878)     </rng>
	I0528 20:38:28.956707   22579 main.go:141] libmachine: (ha-908878)     
	I0528 20:38:28.956714   22579 main.go:141] libmachine: (ha-908878)     
	I0528 20:38:28.956722   22579 main.go:141] libmachine: (ha-908878)   </devices>
	I0528 20:38:28.956733   22579 main.go:141] libmachine: (ha-908878) </domain>
	I0528 20:38:28.956742   22579 main.go:141] libmachine: (ha-908878) 
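
The domain definition dumped above is likewise plain XML handed to libvirt. As a minimal sketch, a few of the top-level fields visible in that dump (name, memory in MiB, vcpu count) could be modeled and marshaled with encoding/xml as below; the struct here is hypothetical and covers only those fields, whereas the real driver builds the full document shown in the log.

package main

import (
	"encoding/xml"
	"fmt"
)

// domain models only the top-level fields visible in the logged XML.
type domain struct {
	XMLName xml.Name `xml:"domain"`
	Type    string   `xml:"type,attr"`
	Name    string   `xml:"name"`
	Memory  struct {
		Unit  string `xml:"unit,attr"`
		Value string `xml:",chardata"`
	} `xml:"memory"`
	VCPU int `xml:"vcpu"`
}

func main() {
	d := domain{Type: "kvm", Name: "ha-908878", VCPU: 2}
	d.Memory.Unit = "MiB"
	d.Memory.Value = "2200"

	out, err := xml.MarshalIndent(d, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
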
	I0528 20:38:28.960610   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:ea:b9:f9 in network default
	I0528 20:38:28.961119   22579 main.go:141] libmachine: (ha-908878) Ensuring networks are active...
	I0528 20:38:28.961134   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:28.961802   22579 main.go:141] libmachine: (ha-908878) Ensuring network default is active
	I0528 20:38:28.962108   22579 main.go:141] libmachine: (ha-908878) Ensuring network mk-ha-908878 is active
	I0528 20:38:28.962636   22579 main.go:141] libmachine: (ha-908878) Getting domain xml...
	I0528 20:38:28.963400   22579 main.go:141] libmachine: (ha-908878) Creating domain...
	I0528 20:38:30.122597   22579 main.go:141] libmachine: (ha-908878) Waiting to get IP...
	I0528 20:38:30.123378   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:30.123741   22579 main.go:141] libmachine: (ha-908878) DBG | unable to find current IP address of domain ha-908878 in network mk-ha-908878
	I0528 20:38:30.123764   22579 main.go:141] libmachine: (ha-908878) DBG | I0528 20:38:30.123716   22602 retry.go:31] will retry after 239.467208ms: waiting for machine to come up
	I0528 20:38:30.365210   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:30.365776   22579 main.go:141] libmachine: (ha-908878) DBG | unable to find current IP address of domain ha-908878 in network mk-ha-908878
	I0528 20:38:30.365806   22579 main.go:141] libmachine: (ha-908878) DBG | I0528 20:38:30.365717   22602 retry.go:31] will retry after 260.357194ms: waiting for machine to come up
	I0528 20:38:30.627156   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:30.627558   22579 main.go:141] libmachine: (ha-908878) DBG | unable to find current IP address of domain ha-908878 in network mk-ha-908878
	I0528 20:38:30.627587   22579 main.go:141] libmachine: (ha-908878) DBG | I0528 20:38:30.627511   22602 retry.go:31] will retry after 315.484937ms: waiting for machine to come up
	I0528 20:38:30.944936   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:30.945401   22579 main.go:141] libmachine: (ha-908878) DBG | unable to find current IP address of domain ha-908878 in network mk-ha-908878
	I0528 20:38:30.945419   22579 main.go:141] libmachine: (ha-908878) DBG | I0528 20:38:30.945362   22602 retry.go:31] will retry after 403.722417ms: waiting for machine to come up
	I0528 20:38:31.351165   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:31.351582   22579 main.go:141] libmachine: (ha-908878) DBG | unable to find current IP address of domain ha-908878 in network mk-ha-908878
	I0528 20:38:31.351618   22579 main.go:141] libmachine: (ha-908878) DBG | I0528 20:38:31.351558   22602 retry.go:31] will retry after 705.789161ms: waiting for machine to come up
	I0528 20:38:32.058483   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:32.058911   22579 main.go:141] libmachine: (ha-908878) DBG | unable to find current IP address of domain ha-908878 in network mk-ha-908878
	I0528 20:38:32.058938   22579 main.go:141] libmachine: (ha-908878) DBG | I0528 20:38:32.058845   22602 retry.go:31] will retry after 853.06609ms: waiting for machine to come up
	I0528 20:38:32.913390   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:32.913788   22579 main.go:141] libmachine: (ha-908878) DBG | unable to find current IP address of domain ha-908878 in network mk-ha-908878
	I0528 20:38:32.913830   22579 main.go:141] libmachine: (ha-908878) DBG | I0528 20:38:32.913698   22602 retry.go:31] will retry after 930.199316ms: waiting for machine to come up
	I0528 20:38:33.845161   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:33.845714   22579 main.go:141] libmachine: (ha-908878) DBG | unable to find current IP address of domain ha-908878 in network mk-ha-908878
	I0528 20:38:33.845753   22579 main.go:141] libmachine: (ha-908878) DBG | I0528 20:38:33.845660   22602 retry.go:31] will retry after 1.45078343s: waiting for machine to come up
	I0528 20:38:35.298107   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:35.298584   22579 main.go:141] libmachine: (ha-908878) DBG | unable to find current IP address of domain ha-908878 in network mk-ha-908878
	I0528 20:38:35.298611   22579 main.go:141] libmachine: (ha-908878) DBG | I0528 20:38:35.298533   22602 retry.go:31] will retry after 1.507467761s: waiting for machine to come up
	I0528 20:38:36.808111   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:36.808497   22579 main.go:141] libmachine: (ha-908878) DBG | unable to find current IP address of domain ha-908878 in network mk-ha-908878
	I0528 20:38:36.808519   22579 main.go:141] libmachine: (ha-908878) DBG | I0528 20:38:36.808461   22602 retry.go:31] will retry after 1.96576782s: waiting for machine to come up
	I0528 20:38:38.775422   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:38.775838   22579 main.go:141] libmachine: (ha-908878) DBG | unable to find current IP address of domain ha-908878 in network mk-ha-908878
	I0528 20:38:38.775867   22579 main.go:141] libmachine: (ha-908878) DBG | I0528 20:38:38.775781   22602 retry.go:31] will retry after 1.786105039s: waiting for machine to come up
	I0528 20:38:40.564563   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:40.564971   22579 main.go:141] libmachine: (ha-908878) DBG | unable to find current IP address of domain ha-908878 in network mk-ha-908878
	I0528 20:38:40.565005   22579 main.go:141] libmachine: (ha-908878) DBG | I0528 20:38:40.564941   22602 retry.go:31] will retry after 3.177899355s: waiting for machine to come up
	I0528 20:38:43.744675   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:43.745084   22579 main.go:141] libmachine: (ha-908878) DBG | unable to find current IP address of domain ha-908878 in network mk-ha-908878
	I0528 20:38:43.745107   22579 main.go:141] libmachine: (ha-908878) DBG | I0528 20:38:43.745033   22602 retry.go:31] will retry after 4.318254436s: waiting for machine to come up
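
The repeated "will retry after ..." lines above come from a backoff-based wait for the guest to obtain a DHCP lease. A small, self-contained sketch of that retry pattern follows; the lookup function is a hypothetical stand-in for the driver's real lease lookup, and the delays are illustrative.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for querying libvirt's DHCP leases;
// it fails until the guest has actually been assigned an address.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP retries lookupIP with a growing, jittered delay, mirroring the
// retry.go pattern visible in the log, until the deadline passes.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		ip, err := lookupIP()
		if err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("attempt %d failed: %v; will retry after %v\n", attempt, err, wait)
		time.Sleep(wait)
		delay *= 2 // back off between attempts
	}
	return "", errors.New("timed out waiting for machine to come up")
}

func main() {
	if _, err := waitForIP(2 * time.Second); err != nil {
		fmt.Println(err)
	}
}
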
	I0528 20:38:48.064298   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:48.064765   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has current primary IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:48.064795   22579 main.go:141] libmachine: (ha-908878) Found IP for machine: 192.168.39.100
	I0528 20:38:48.064809   22579 main.go:141] libmachine: (ha-908878) Reserving static IP address...
	I0528 20:38:48.065123   22579 main.go:141] libmachine: (ha-908878) DBG | unable to find host DHCP lease matching {name: "ha-908878", mac: "52:54:00:bc:73:cb", ip: "192.168.39.100"} in network mk-ha-908878
	I0528 20:38:48.136166   22579 main.go:141] libmachine: (ha-908878) DBG | Getting to WaitForSSH function...
	I0528 20:38:48.136194   22579 main.go:141] libmachine: (ha-908878) Reserved static IP address: 192.168.39.100
	I0528 20:38:48.136255   22579 main.go:141] libmachine: (ha-908878) Waiting for SSH to be available...
	I0528 20:38:48.138625   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:48.139099   22579 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:minikube Clientid:01:52:54:00:bc:73:cb}
	I0528 20:38:48.139124   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:48.139358   22579 main.go:141] libmachine: (ha-908878) DBG | Using SSH client type: external
	I0528 20:38:48.139388   22579 main.go:141] libmachine: (ha-908878) DBG | Using SSH private key: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/id_rsa (-rw-------)
	I0528 20:38:48.139441   22579 main.go:141] libmachine: (ha-908878) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.100 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0528 20:38:48.139460   22579 main.go:141] libmachine: (ha-908878) DBG | About to run SSH command:
	I0528 20:38:48.139480   22579 main.go:141] libmachine: (ha-908878) DBG | exit 0
	I0528 20:38:48.265512   22579 main.go:141] libmachine: (ha-908878) DBG | SSH cmd err, output: <nil>: 
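
The DBG lines above log the external SSH client invocation used to probe the guest with "exit 0". For reference, that argument list can be reproduced with os/exec as in the sketch below; the flags, address, and key path are copied from the log, and the program only demonstrates the shape of the call rather than minikube's implementation.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Flags copied from the logged external SSH invocation; the remote
	// command is simply "exit 0" to prove the guest's sshd is reachable.
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "ControlMaster=no",
		"-o", "ControlPath=none",
		"-o", "LogLevel=quiet",
		"-o", "PasswordAuthentication=no",
		"-o", "ServerAliveInterval=60",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"docker@192.168.39.100",
		"-o", "IdentitiesOnly=yes",
		"-i", "/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/id_rsa",
		"-p", "22",
		"exit 0",
	}
	out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
	fmt.Printf("ssh output: %q, err: %v\n", out, err)
}
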
	I0528 20:38:48.265775   22579 main.go:141] libmachine: (ha-908878) KVM machine creation complete!
	I0528 20:38:48.266075   22579 main.go:141] libmachine: (ha-908878) Calling .GetConfigRaw
	I0528 20:38:48.266535   22579 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:38:48.266734   22579 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:38:48.266881   22579 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0528 20:38:48.266894   22579 main.go:141] libmachine: (ha-908878) Calling .GetState
	I0528 20:38:48.268080   22579 main.go:141] libmachine: Detecting operating system of created instance...
	I0528 20:38:48.268092   22579 main.go:141] libmachine: Waiting for SSH to be available...
	I0528 20:38:48.268102   22579 main.go:141] libmachine: Getting to WaitForSSH function...
	I0528 20:38:48.268108   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:38:48.270260   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:48.270559   22579 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:38:48.270598   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:48.270668   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:38:48.270813   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:38:48.270951   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:38:48.271067   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:38:48.271194   22579 main.go:141] libmachine: Using SSH client type: native
	I0528 20:38:48.271358   22579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0528 20:38:48.271369   22579 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0528 20:38:48.376611   22579 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 20:38:48.376634   22579 main.go:141] libmachine: Detecting the provisioner...
	I0528 20:38:48.376643   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:38:48.379304   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:48.379651   22579 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:38:48.379684   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:48.379771   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:38:48.379955   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:38:48.380110   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:38:48.380271   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:38:48.380435   22579 main.go:141] libmachine: Using SSH client type: native
	I0528 20:38:48.380644   22579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0528 20:38:48.380661   22579 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0528 20:38:48.489958   22579 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0528 20:38:48.490049   22579 main.go:141] libmachine: found compatible host: buildroot
	I0528 20:38:48.490065   22579 main.go:141] libmachine: Provisioning with buildroot...
	I0528 20:38:48.490077   22579 main.go:141] libmachine: (ha-908878) Calling .GetMachineName
	I0528 20:38:48.490291   22579 buildroot.go:166] provisioning hostname "ha-908878"
	I0528 20:38:48.490314   22579 main.go:141] libmachine: (ha-908878) Calling .GetMachineName
	I0528 20:38:48.490462   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:38:48.492870   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:48.493158   22579 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:38:48.493196   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:48.493290   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:38:48.493469   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:38:48.493622   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:38:48.493772   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:38:48.493895   22579 main.go:141] libmachine: Using SSH client type: native
	I0528 20:38:48.494099   22579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0528 20:38:48.494115   22579 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-908878 && echo "ha-908878" | sudo tee /etc/hostname
	I0528 20:38:48.615192   22579 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-908878
	
	I0528 20:38:48.615213   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:38:48.617637   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:48.617972   22579 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:38:48.617998   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:48.618145   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:38:48.618340   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:38:48.618503   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:38:48.618640   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:38:48.618779   22579 main.go:141] libmachine: Using SSH client type: native
	I0528 20:38:48.618918   22579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0528 20:38:48.618933   22579 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-908878' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-908878/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-908878' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 20:38:48.733892   22579 main.go:141] libmachine: SSH cmd err, output: <nil>: 
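
The hostname step above runs two shell snippets over SSH: one writes /etc/hostname, the other makes sure /etc/hosts maps 127.0.1.1 to the new name. A rough Go sketch of assembling those commands is below; the helper is hypothetical, and in the real flow the strings are executed through the ssh_runner seen in the log rather than printed.

package main

import "fmt"

// hostnameCommands returns the two shell snippets the provisioner runs:
// setting the hostname, then patching /etc/hosts idempotently. The command
// text mirrors what the log shows; this is not minikube's source.
func hostnameCommands(name string) []string {
	set := fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name)
	fixHosts := fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
  fi
fi`, name)
	return []string{set, fixHosts}
}

func main() {
	for _, cmd := range hostnameCommands("ha-908878") {
		fmt.Println(cmd)
		fmt.Println("---")
	}
}
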
	I0528 20:38:48.733916   22579 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18966-3963/.minikube CaCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18966-3963/.minikube}
	I0528 20:38:48.733945   22579 buildroot.go:174] setting up certificates
	I0528 20:38:48.733958   22579 provision.go:84] configureAuth start
	I0528 20:38:48.733974   22579 main.go:141] libmachine: (ha-908878) Calling .GetMachineName
	I0528 20:38:48.734211   22579 main.go:141] libmachine: (ha-908878) Calling .GetIP
	I0528 20:38:48.736486   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:48.736765   22579 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:38:48.736787   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:48.736920   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:38:48.738949   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:48.739282   22579 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:38:48.739306   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:48.739421   22579 provision.go:143] copyHostCerts
	I0528 20:38:48.739452   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem
	I0528 20:38:48.739482   22579 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem, removing ...
	I0528 20:38:48.739494   22579 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem
	I0528 20:38:48.739554   22579 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem (1078 bytes)
	I0528 20:38:48.739634   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem
	I0528 20:38:48.739651   22579 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem, removing ...
	I0528 20:38:48.739657   22579 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem
	I0528 20:38:48.739681   22579 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem (1123 bytes)
	I0528 20:38:48.739732   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem
	I0528 20:38:48.739753   22579 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem, removing ...
	I0528 20:38:48.739760   22579 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem
	I0528 20:38:48.739780   22579 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem (1675 bytes)
	I0528 20:38:48.739835   22579 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem org=jenkins.ha-908878 san=[127.0.0.1 192.168.39.100 ha-908878 localhost minikube]
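
configureAuth generates a server certificate whose SANs are the ones listed above (loopback, the VM IP, and the host names). The sketch below shows how a certificate with those SANs can be produced with crypto/x509; for brevity it self-signs instead of signing with the CA key in .minikube/certs, so it illustrates only the SAN handling, not minikube's actual provisioning code.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SANs taken from the logged "san=[...]" list for ha-908878.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-908878"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-908878", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.100")},
	}

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// Self-signed for the sketch; the real flow signs with the local CA
	// key instead of the server key itself.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
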
	I0528 20:38:48.984696   22579 provision.go:177] copyRemoteCerts
	I0528 20:38:48.984750   22579 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 20:38:48.984771   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:38:48.987414   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:48.987713   22579 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:38:48.987737   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:48.987932   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:38:48.988125   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:38:48.988391   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:38:48.988533   22579 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/id_rsa Username:docker}
	I0528 20:38:49.075941   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0528 20:38:49.075995   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0528 20:38:49.099179   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0528 20:38:49.099223   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0528 20:38:49.121756   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0528 20:38:49.121819   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0528 20:38:49.144028   22579 provision.go:87] duration metric: took 410.05864ms to configureAuth
	I0528 20:38:49.144046   22579 buildroot.go:189] setting minikube options for container-runtime
	I0528 20:38:49.144200   22579 config.go:182] Loaded profile config "ha-908878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 20:38:49.144289   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:38:49.146775   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:49.147067   22579 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:38:49.147090   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:49.147223   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:38:49.147410   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:38:49.147585   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:38:49.147711   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:38:49.147880   22579 main.go:141] libmachine: Using SSH client type: native
	I0528 20:38:49.148087   22579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0528 20:38:49.148114   22579 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0528 20:38:49.420792   22579 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
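The "%!s(MISSING)" in the logged command above appears to be Go's fmt placeholder for a missing argument in the log message itself, not in the executed command; the SSH output just above shows that the CRIO_MINIKUBE_OPTIONS content did reach /etc/sysconfig/crio.minikube. A sketch of composing that remote command is below; the helper name is hypothetical.

package main

import "fmt"

// crioSysconfigCmd builds the remote shell command that writes the
// container-runtime options file and restarts CRI-O, as seen in the log.
func crioSysconfigCmd(opts string) string {
	return fmt.Sprintf(`sudo mkdir -p /etc/sysconfig && printf %%s "
CRIO_MINIKUBE_OPTIONS='%s'
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`, opts)
}

func main() {
	fmt.Println(crioSysconfigCmd("--insecure-registry 10.96.0.0/12 "))
}
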
	I0528 20:38:49.420821   22579 main.go:141] libmachine: Checking connection to Docker...
	I0528 20:38:49.420831   22579 main.go:141] libmachine: (ha-908878) Calling .GetURL
	I0528 20:38:49.422176   22579 main.go:141] libmachine: (ha-908878) DBG | Using libvirt version 6000000
	I0528 20:38:49.424073   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:49.424362   22579 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:38:49.424394   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:49.424516   22579 main.go:141] libmachine: Docker is up and running!
	I0528 20:38:49.424531   22579 main.go:141] libmachine: Reticulating splines...
	I0528 20:38:49.424539   22579 client.go:171] duration metric: took 20.828718668s to LocalClient.Create
	I0528 20:38:49.424566   22579 start.go:167] duration metric: took 20.828790777s to libmachine.API.Create "ha-908878"
	I0528 20:38:49.424578   22579 start.go:293] postStartSetup for "ha-908878" (driver="kvm2")
	I0528 20:38:49.424592   22579 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0528 20:38:49.424614   22579 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:38:49.424841   22579 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0528 20:38:49.424861   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:38:49.426765   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:49.427217   22579 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:38:49.427240   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:49.427340   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:38:49.427485   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:38:49.427633   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:38:49.427818   22579 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/id_rsa Username:docker}
	I0528 20:38:49.511709   22579 ssh_runner.go:195] Run: cat /etc/os-release
	I0528 20:38:49.515889   22579 info.go:137] Remote host: Buildroot 2023.02.9
	I0528 20:38:49.515913   22579 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/addons for local assets ...
	I0528 20:38:49.515977   22579 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/files for local assets ...
	I0528 20:38:49.516088   22579 filesync.go:149] local asset: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem -> 117602.pem in /etc/ssl/certs
	I0528 20:38:49.516100   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem -> /etc/ssl/certs/117602.pem
	I0528 20:38:49.516215   22579 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0528 20:38:49.525404   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem --> /etc/ssl/certs/117602.pem (1708 bytes)
	I0528 20:38:49.547425   22579 start.go:296] duration metric: took 122.835572ms for postStartSetup
	I0528 20:38:49.547461   22579 main.go:141] libmachine: (ha-908878) Calling .GetConfigRaw
	I0528 20:38:49.547931   22579 main.go:141] libmachine: (ha-908878) Calling .GetIP
	I0528 20:38:49.551167   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:49.551493   22579 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:38:49.551517   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:49.551723   22579 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/config.json ...
	I0528 20:38:49.551870   22579 start.go:128] duration metric: took 20.973203625s to createHost
	I0528 20:38:49.551889   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:38:49.553803   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:49.554072   22579 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:38:49.554099   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:49.554191   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:38:49.554357   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:38:49.554512   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:38:49.554648   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:38:49.554804   22579 main.go:141] libmachine: Using SSH client type: native
	I0528 20:38:49.554956   22579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0528 20:38:49.554966   22579 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0528 20:38:49.662123   22579 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716928729.634172717
	
	I0528 20:38:49.662140   22579 fix.go:216] guest clock: 1716928729.634172717
	I0528 20:38:49.662147   22579 fix.go:229] Guest: 2024-05-28 20:38:49.634172717 +0000 UTC Remote: 2024-05-28 20:38:49.551880955 +0000 UTC m=+21.076168656 (delta=82.291762ms)
	I0528 20:38:49.662164   22579 fix.go:200] guest clock delta is within tolerance: 82.291762ms
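
The fix.go lines above compare the guest clock (read remotely via "date +%s.%N") against the host clock and accept the result if the delta stays within a tolerance. A small sketch of that comparison, using the values from the log, follows; the tolerance constant is an assumption for illustration only.

package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockDelta parses the guest's "date +%s.%N" output and returns how far
// the guest clock is from the given host reference time.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, fmt.Errorf("parsing guest clock %q: %w", guestOut, err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// Values taken from the log: guest 1716928729.634172717 vs the host
	// reference 2024-05-28 20:38:49.551880955 UTC (delta ~82ms).
	host := time.Date(2024, 5, 28, 20, 38, 49, 551880955, time.UTC)
	delta, err := clockDelta("1716928729.634172717", host)
	if err != nil {
		panic(err)
	}
	const tolerance = time.Second // assumed tolerance for the sketch
	fmt.Printf("delta=%v within tolerance=%v: %t\n", delta, tolerance, delta.Abs() <= tolerance)
}
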
	I0528 20:38:49.662169   22579 start.go:83] releasing machines lock for "ha-908878", held for 21.083572545s
	I0528 20:38:49.662183   22579 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:38:49.662408   22579 main.go:141] libmachine: (ha-908878) Calling .GetIP
	I0528 20:38:49.664697   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:49.665028   22579 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:38:49.665052   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:49.665198   22579 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:38:49.665658   22579 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:38:49.665868   22579 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:38:49.665963   22579 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0528 20:38:49.666008   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:38:49.666112   22579 ssh_runner.go:195] Run: cat /version.json
	I0528 20:38:49.666135   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:38:49.668578   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:49.668711   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:49.668899   22579 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:38:49.668918   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:49.669027   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:38:49.669166   22579 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:38:49.669173   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:38:49.669192   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:49.669306   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:38:49.669371   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:38:49.669454   22579 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/id_rsa Username:docker}
	I0528 20:38:49.669528   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:38:49.669654   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:38:49.669823   22579 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/id_rsa Username:docker}
	I0528 20:38:49.785714   22579 ssh_runner.go:195] Run: systemctl --version
	I0528 20:38:49.791431   22579 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0528 20:38:49.946535   22579 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0528 20:38:49.952778   22579 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0528 20:38:49.952841   22579 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 20:38:49.967958   22579 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
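
The CNI cleanup above disables conflicting bridge/podman configs by renaming them with a ".mk_disabled" suffix rather than deleting them, so they can be restored later. A small Go sketch of the same rename pass; doing it as a local helper instead of find/mv over SSH is an assumption for illustration:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableConflictingCNI renames bridge/podman CNI configs in dir by appending
// ".mk_disabled", mirroring the find/mv step in the log above.
func disableConflictingCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return nil, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	files, err := disableConflictingCNI("/etc/cni/net.d")
	if err != nil {
		fmt.Println("skipping:", err)
		return
	}
	fmt.Println("disabled:", files)
}
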
	I0528 20:38:49.967974   22579 start.go:494] detecting cgroup driver to use...
	I0528 20:38:49.968032   22579 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0528 20:38:49.983154   22579 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 20:38:49.996248   22579 docker.go:217] disabling cri-docker service (if available) ...
	I0528 20:38:49.996292   22579 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0528 20:38:50.009245   22579 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0528 20:38:50.021833   22579 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0528 20:38:50.132329   22579 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0528 20:38:50.281366   22579 docker.go:233] disabling docker service ...
	I0528 20:38:50.281445   22579 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0528 20:38:50.295507   22579 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0528 20:38:50.308570   22579 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0528 20:38:50.425719   22579 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0528 20:38:50.542751   22579 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0528 20:38:50.556721   22579 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 20:38:50.574447   22579 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0528 20:38:50.574511   22579 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:38:50.584319   22579 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0528 20:38:50.584363   22579 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:38:50.594409   22579 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:38:50.604233   22579 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:38:50.614035   22579 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0528 20:38:50.624113   22579 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:38:50.633783   22579 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:38:50.650029   22579 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
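
The sed commands above rewrite the cri-o drop-in config (/etc/crio/crio.conf.d/02-crio.conf) to pin the pause image, force the cgroupfs cgroup manager, and adjust conmon/sysctl settings. A Go sketch of two of those edits as a pure string transform, which is an assumption for illustration (the tool runs sed over SSH instead):

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf pins the pause image and the cgroup manager in a crio
// drop-in config, mirroring the first two sed edits in the log above.
func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = pause.ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = cgroup.ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
	return conf
}

func main() {
	in := "pause_image = \"old\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(rewriteCrioConf(in, "registry.k8s.io/pause:3.9", "cgroupfs"))
}
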
	I0528 20:38:50.659849   22579 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0528 20:38:50.668562   22579 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0528 20:38:50.668594   22579 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0528 20:38:50.680820   22579 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0528 20:38:50.690010   22579 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 20:38:50.803010   22579 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0528 20:38:50.931454   22579 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0528 20:38:50.931531   22579 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0528 20:38:50.936715   22579 start.go:562] Will wait 60s for crictl version
	I0528 20:38:50.936767   22579 ssh_runner.go:195] Run: which crictl
	I0528 20:38:50.940639   22579 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0528 20:38:50.978739   22579 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0528 20:38:50.978812   22579 ssh_runner.go:195] Run: crio --version
	I0528 20:38:51.005021   22579 ssh_runner.go:195] Run: crio --version
	I0528 20:38:51.035112   22579 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0528 20:38:51.036486   22579 main.go:141] libmachine: (ha-908878) Calling .GetIP
	I0528 20:38:51.038790   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:51.039119   22579 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:38:51.039140   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:38:51.039303   22579 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0528 20:38:51.043414   22579 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 20:38:51.056018   22579 kubeadm.go:877] updating cluster {Name:ha-908878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Cl
usterName:ha-908878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0528 20:38:51.056109   22579 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 20:38:51.056147   22579 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 20:38:51.087184   22579 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0528 20:38:51.087233   22579 ssh_runner.go:195] Run: which lz4
	I0528 20:38:51.091162   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0528 20:38:51.091273   22579 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0528 20:38:51.095372   22579 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0528 20:38:51.095400   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0528 20:38:52.434056   22579 crio.go:462] duration metric: took 1.342826793s to copy over tarball
	I0528 20:38:52.434148   22579 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0528 20:38:54.508765   22579 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.074580937s)
	I0528 20:38:54.508794   22579 crio.go:469] duration metric: took 2.074713225s to extract the tarball
	I0528 20:38:54.508800   22579 ssh_runner.go:146] rm: /preloaded.tar.lz4
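
The preload handling above is idempotent: check whether the tarball is already on the node, copy it over if not, unpack it with lz4 into /var, then delete the tarball. A rough Go sketch of that sequence; using a local cp in place of scp and folding the steps into one helper are assumptions for illustration:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensurePreload copies and unpacks the preloaded image tarball if it is not
// already present, following the stat/scp/tar/rm sequence in the log above.
func ensurePreload(local, remote, destDir string) error {
	if _, err := os.Stat(remote); err == nil {
		return nil // already present, nothing to do
	}
	if err := exec.Command("cp", local, remote).Run(); err != nil {
		return err
	}
	if err := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", destDir, "-xf", remote).Run(); err != nil {
		return err
	}
	return os.Remove(remote)
}

func main() {
	fmt.Println(ensurePreload("preloaded-images.tar.lz4", "/preloaded.tar.lz4", "/var"))
}
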
	I0528 20:38:54.545376   22579 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 20:38:54.588637   22579 crio.go:514] all images are preloaded for cri-o runtime.
	I0528 20:38:54.588657   22579 cache_images.go:84] Images are preloaded, skipping loading
	I0528 20:38:54.588664   22579 kubeadm.go:928] updating node { 192.168.39.100 8443 v1.30.1 crio true true} ...
	I0528 20:38:54.588754   22579 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-908878 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-908878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0528 20:38:54.588815   22579 ssh_runner.go:195] Run: crio config
	I0528 20:38:54.642509   22579 cni.go:84] Creating CNI manager for ""
	I0528 20:38:54.642526   22579 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0528 20:38:54.642535   22579 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0528 20:38:54.642553   22579 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.100 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-908878 NodeName:ha-908878 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0528 20:38:54.642666   22579 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.100
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-908878"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.100
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.100"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0528 20:38:54.642687   22579 kube-vip.go:115] generating kube-vip config ...
	I0528 20:38:54.642725   22579 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0528 20:38:54.660351   22579 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0528 20:38:54.660473   22579 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
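
The rendered kube-vip manifest above is later copied to /etc/kubernetes/manifests/kube-vip.yaml, the kubelet's static-pod directory, so the VIP pod starts without going through the API server. A small Go sketch of that final write; the helper itself is hypothetical:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// writeStaticPod drops a rendered manifest into the kubelet static-pod
// directory, which is how the kube-vip pod above ends up being scheduled.
func writeStaticPod(manifestDir, name, manifest string) error {
	if err := os.MkdirAll(manifestDir, 0o755); err != nil {
		return err
	}
	return os.WriteFile(filepath.Join(manifestDir, name), []byte(manifest), 0o600)
}

func main() {
	err := writeStaticPod("/etc/kubernetes/manifests", "kube-vip.yaml",
		"apiVersion: v1\nkind: Pod\n# rendered kube-vip config elided\n")
	fmt.Println(err)
}
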
	I0528 20:38:54.660537   22579 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0528 20:38:54.670336   22579 binaries.go:44] Found k8s binaries, skipping transfer
	I0528 20:38:54.670394   22579 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0528 20:38:54.679560   22579 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0528 20:38:54.695475   22579 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0528 20:38:54.710820   22579 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0528 20:38:54.726283   22579 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0528 20:38:54.742192   22579 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0528 20:38:54.745729   22579 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
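
Both /etc/hosts updates (host.minikube.internal earlier and control-plane.minikube.internal here) follow the same pattern: strip any existing line for the name, then append "ip<TAB>name". A Go sketch of that rewrite applied to a plain string, which is an assumption for illustration (the tool does it with the bash one-liner shown above):

package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry removes any line ending in "\t<name>" and appends a fresh
// "ip\tname" entry, mirroring the grep -v / echo pipeline in the log above.
func ensureHostsEntry(hosts, ip, name string) string {
	var keep []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+name) {
			keep = append(keep, line)
		}
	}
	keep = append(keep, ip+"\t"+name)
	return strings.Join(keep, "\n") + "\n"
}

func main() {
	fmt.Print(ensureHostsEntry("127.0.0.1\tlocalhost\n", "192.168.39.254", "control-plane.minikube.internal"))
}
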
	I0528 20:38:54.757819   22579 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 20:38:54.876320   22579 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 20:38:54.892785   22579 certs.go:68] Setting up /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878 for IP: 192.168.39.100
	I0528 20:38:54.892803   22579 certs.go:194] generating shared ca certs ...
	I0528 20:38:54.892817   22579 certs.go:226] acquiring lock for ca certs: {Name:mkf081a2e878fd451cfbdf01dc28a5f7a6a3abc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:38:54.892971   22579 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key
	I0528 20:38:54.893009   22579 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key
	I0528 20:38:54.893019   22579 certs.go:256] generating profile certs ...
	I0528 20:38:54.893061   22579 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/client.key
	I0528 20:38:54.893074   22579 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/client.crt with IP's: []
	I0528 20:38:54.965324   22579 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/client.crt ...
	I0528 20:38:54.965348   22579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/client.crt: {Name:mk04662cee3162313797f69f105fd22fa987f6b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:38:54.965538   22579 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/client.key ...
	I0528 20:38:54.965553   22579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/client.key: {Name:mk1af1e1f86c54769b7fe70d345e0cd7ccf018c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:38:54.965633   22579 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key.c4f31d45
	I0528 20:38:54.965648   22579 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt.c4f31d45 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.100 192.168.39.254]
	I0528 20:38:55.548317   22579 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt.c4f31d45 ...
	I0528 20:38:55.548343   22579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt.c4f31d45: {Name:mkd40d2038fb3fdfc8b37af76ff3afaefb2368e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:38:55.548513   22579 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key.c4f31d45 ...
	I0528 20:38:55.548530   22579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key.c4f31d45: {Name:mk8b133081a94b50973c4cf69bd7e8393e52a09c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:38:55.548630   22579 certs.go:381] copying /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt.c4f31d45 -> /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt
	I0528 20:38:55.548718   22579 certs.go:385] copying /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key.c4f31d45 -> /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key
	I0528 20:38:55.548778   22579 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/proxy-client.key
	I0528 20:38:55.548794   22579 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/proxy-client.crt with IP's: []
	I0528 20:38:55.595371   22579 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/proxy-client.crt ...
	I0528 20:38:55.595395   22579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/proxy-client.crt: {Name:mk74e6fe33213c1f2ad92f1d4eda4579c8e53eec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:38:55.595538   22579 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/proxy-client.key ...
	I0528 20:38:55.595551   22579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/proxy-client.key: {Name:mk5dd9209bc6457e3b260fb1bf0944035f78220d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:38:55.595638   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0528 20:38:55.595656   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0528 20:38:55.595668   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0528 20:38:55.595680   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0528 20:38:55.595690   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0528 20:38:55.595702   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0528 20:38:55.595711   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0528 20:38:55.595723   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0528 20:38:55.595804   22579 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem (1338 bytes)
	W0528 20:38:55.595841   22579 certs.go:480] ignoring /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760_empty.pem, impossibly tiny 0 bytes
	I0528 20:38:55.595851   22579 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem (1675 bytes)
	I0528 20:38:55.595870   22579 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem (1078 bytes)
	I0528 20:38:55.595895   22579 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem (1123 bytes)
	I0528 20:38:55.595915   22579 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem (1675 bytes)
	I0528 20:38:55.595958   22579 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem (1708 bytes)
	I0528 20:38:55.595983   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0528 20:38:55.595996   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem -> /usr/share/ca-certificates/11760.pem
	I0528 20:38:55.596005   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem -> /usr/share/ca-certificates/117602.pem
	I0528 20:38:55.596498   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0528 20:38:55.622611   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0528 20:38:55.646231   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0528 20:38:55.674555   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0528 20:38:55.703161   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0528 20:38:55.725075   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0528 20:38:55.748465   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0528 20:38:55.771076   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0528 20:38:55.793745   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0528 20:38:55.816868   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem --> /usr/share/ca-certificates/11760.pem (1338 bytes)
	I0528 20:38:55.839445   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem --> /usr/share/ca-certificates/117602.pem (1708 bytes)
	I0528 20:38:55.867401   22579 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0528 20:38:55.886462   22579 ssh_runner.go:195] Run: openssl version
	I0528 20:38:55.892252   22579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0528 20:38:55.904312   22579 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0528 20:38:55.908752   22579 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 28 20:22 /usr/share/ca-certificates/minikubeCA.pem
	I0528 20:38:55.908798   22579 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0528 20:38:55.914480   22579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0528 20:38:55.925428   22579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11760.pem && ln -fs /usr/share/ca-certificates/11760.pem /etc/ssl/certs/11760.pem"
	I0528 20:38:55.935860   22579 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11760.pem
	I0528 20:38:55.940145   22579 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 28 20:34 /usr/share/ca-certificates/11760.pem
	I0528 20:38:55.940189   22579 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11760.pem
	I0528 20:38:55.945611   22579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11760.pem /etc/ssl/certs/51391683.0"
	I0528 20:38:55.955927   22579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/117602.pem && ln -fs /usr/share/ca-certificates/117602.pem /etc/ssl/certs/117602.pem"
	I0528 20:38:55.966605   22579 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117602.pem
	I0528 20:38:55.971093   22579 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 28 20:34 /usr/share/ca-certificates/117602.pem
	I0528 20:38:55.971135   22579 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117602.pem
	I0528 20:38:55.976609   22579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/117602.pem /etc/ssl/certs/3ec20f2e.0"
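
Each CA certificate above is placed under /usr/share/ca-certificates and then linked as /etc/ssl/certs/<subject-hash>.0, which is the layout OpenSSL uses to look up trusted roots. A Go sketch of that hash-and-symlink step; calling the openssl binary locally rather than over SSH, and linking the .pem path directly, are assumptions for illustration:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCACert computes the OpenSSL subject hash of a CA certificate and
// exposes it as /etc/ssl/certs/<hash>.0 if that link does not exist yet,
// mirroring the openssl x509 -hash / ln -fs pair in the log above.
func linkCACert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(pem, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("skipping:", err)
	}
}
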
	I0528 20:38:55.987317   22579 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0528 20:38:55.991405   22579 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0528 20:38:55.991462   22579 kubeadm.go:391] StartCluster: {Name:ha-908878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clust
erName:ha-908878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 20:38:55.991550   22579 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0528 20:38:55.991591   22579 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0528 20:38:56.031545   22579 cri.go:89] found id: ""
	I0528 20:38:56.031606   22579 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0528 20:38:56.041726   22579 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0528 20:38:56.051499   22579 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0528 20:38:56.060959   22579 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0528 20:38:56.060979   22579 kubeadm.go:156] found existing configuration files:
	
	I0528 20:38:56.061011   22579 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0528 20:38:56.069989   22579 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0528 20:38:56.070041   22579 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0528 20:38:56.079293   22579 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0528 20:38:56.088136   22579 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0528 20:38:56.088181   22579 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0528 20:38:56.097481   22579 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0528 20:38:56.106289   22579 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0528 20:38:56.106338   22579 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0528 20:38:56.115374   22579 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0528 20:38:56.123980   22579 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0528 20:38:56.124028   22579 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0528 20:38:56.133084   22579 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0528 20:38:56.366487   22579 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0528 20:39:07.836695   22579 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0528 20:39:07.836768   22579 kubeadm.go:309] [preflight] Running pre-flight checks
	I0528 20:39:07.836865   22579 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0528 20:39:07.836983   22579 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0528 20:39:07.837059   22579 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0528 20:39:07.837113   22579 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0528 20:39:07.838580   22579 out.go:204]   - Generating certificates and keys ...
	I0528 20:39:07.838648   22579 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0528 20:39:07.838697   22579 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0528 20:39:07.838755   22579 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0528 20:39:07.838808   22579 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0528 20:39:07.838882   22579 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0528 20:39:07.838932   22579 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0528 20:39:07.838985   22579 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0528 20:39:07.839092   22579 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-908878 localhost] and IPs [192.168.39.100 127.0.0.1 ::1]
	I0528 20:39:07.839149   22579 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0528 20:39:07.839246   22579 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-908878 localhost] and IPs [192.168.39.100 127.0.0.1 ::1]
	I0528 20:39:07.839334   22579 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0528 20:39:07.839398   22579 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0528 20:39:07.839441   22579 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0528 20:39:07.839488   22579 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0528 20:39:07.839532   22579 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0528 20:39:07.839579   22579 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0528 20:39:07.839633   22579 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0528 20:39:07.839683   22579 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0528 20:39:07.839730   22579 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0528 20:39:07.839799   22579 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0528 20:39:07.839878   22579 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0528 20:39:07.841193   22579 out.go:204]   - Booting up control plane ...
	I0528 20:39:07.841281   22579 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0528 20:39:07.841367   22579 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0528 20:39:07.841447   22579 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0528 20:39:07.841549   22579 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0528 20:39:07.841628   22579 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0528 20:39:07.841662   22579 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0528 20:39:07.841787   22579 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0528 20:39:07.841875   22579 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0528 20:39:07.841934   22579 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.265431ms
	I0528 20:39:07.842012   22579 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0528 20:39:07.842071   22579 kubeadm.go:309] [api-check] The API server is healthy after 6.025489101s
	I0528 20:39:07.842204   22579 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0528 20:39:07.842390   22579 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0528 20:39:07.842474   22579 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0528 20:39:07.842705   22579 kubeadm.go:309] [mark-control-plane] Marking the node ha-908878 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0528 20:39:07.842792   22579 kubeadm.go:309] [bootstrap-token] Using token: yh74jr.5twmrsgoggpczbdk
	I0528 20:39:07.843965   22579 out.go:204]   - Configuring RBAC rules ...
	I0528 20:39:07.844050   22579 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0528 20:39:07.844154   22579 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0528 20:39:07.844309   22579 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0528 20:39:07.844453   22579 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0528 20:39:07.844570   22579 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0528 20:39:07.844675   22579 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0528 20:39:07.844830   22579 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0528 20:39:07.844891   22579 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0528 20:39:07.844951   22579 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0528 20:39:07.844965   22579 kubeadm.go:309] 
	I0528 20:39:07.845031   22579 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0528 20:39:07.845040   22579 kubeadm.go:309] 
	I0528 20:39:07.845125   22579 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0528 20:39:07.845131   22579 kubeadm.go:309] 
	I0528 20:39:07.845180   22579 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0528 20:39:07.845271   22579 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0528 20:39:07.845353   22579 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0528 20:39:07.845362   22579 kubeadm.go:309] 
	I0528 20:39:07.845423   22579 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0528 20:39:07.845429   22579 kubeadm.go:309] 
	I0528 20:39:07.845467   22579 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0528 20:39:07.845476   22579 kubeadm.go:309] 
	I0528 20:39:07.845522   22579 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0528 20:39:07.845584   22579 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0528 20:39:07.845648   22579 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0528 20:39:07.845656   22579 kubeadm.go:309] 
	I0528 20:39:07.845728   22579 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0528 20:39:07.845814   22579 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0528 20:39:07.845821   22579 kubeadm.go:309] 
	I0528 20:39:07.845896   22579 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token yh74jr.5twmrsgoggpczbdk \
	I0528 20:39:07.846025   22579 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1de7c92f7c6fda1d3adee02fe3e58e442e8ad50d9d224864ca8764aa1a3ae8bb \
	I0528 20:39:07.846048   22579 kubeadm.go:309] 	--control-plane 
	I0528 20:39:07.846065   22579 kubeadm.go:309] 
	I0528 20:39:07.846141   22579 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0528 20:39:07.846151   22579 kubeadm.go:309] 
	I0528 20:39:07.846223   22579 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token yh74jr.5twmrsgoggpczbdk \
	I0528 20:39:07.846331   22579 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1de7c92f7c6fda1d3adee02fe3e58e442e8ad50d9d224864ca8764aa1a3ae8bb 
	I0528 20:39:07.846341   22579 cni.go:84] Creating CNI manager for ""
	I0528 20:39:07.846345   22579 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0528 20:39:07.847684   22579 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0528 20:39:07.848691   22579 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0528 20:39:07.854109   22579 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0528 20:39:07.854122   22579 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0528 20:39:07.872861   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0528 20:39:08.334574   22579 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0528 20:39:08.334700   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:08.334736   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-908878 minikube.k8s.io/updated_at=2024_05_28T20_39_08_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82 minikube.k8s.io/name=ha-908878 minikube.k8s.io/primary=true
	I0528 20:39:08.367772   22579 ops.go:34] apiserver oom_adj: -16
	I0528 20:39:08.507693   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:09.008762   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:09.507970   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:10.008494   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:10.508329   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:11.008607   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:11.507714   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:12.008450   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:12.508428   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:13.008496   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:13.508160   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:14.007992   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:14.508296   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:15.007817   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:15.508668   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:16.008011   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:16.508287   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:17.007921   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:17.508586   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:18.008150   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:18.507863   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:19.007850   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:19.507792   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:20.008765   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:20.508271   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:39:20.597881   22579 kubeadm.go:1107] duration metric: took 12.26324806s to wait for elevateKubeSystemPrivileges
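
The long run of identical "kubectl get sa default" calls above is a poll loop: retry on a fixed interval until the default service account exists or a deadline passes, which took about 12.3s here. A generic Go sketch of that pattern; the ~500ms interval matches the timestamps above, while the timeout and the stubbed check are assumptions for illustration:

package main

import (
	"errors"
	"fmt"
	"time"
)

// pollUntil retries check every interval until it succeeds or the timeout
// elapses, the same shape as the repeated service-account lookups above.
func pollUntil(interval, timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting: %w", err)
		}
		time.Sleep(interval)
	}
}

func main() {
	tries := 0
	err := pollUntil(500*time.Millisecond, 30*time.Second, func() error {
		tries++
		if tries < 4 {
			return errors.New("default service account not found yet")
		}
		return nil
	})
	fmt.Println(err, "after", tries, "tries")
}
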
	W0528 20:39:20.597925   22579 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0528 20:39:20.597934   22579 kubeadm.go:393] duration metric: took 24.606476573s to StartCluster
	I0528 20:39:20.597951   22579 settings.go:142] acquiring lock: {Name:mkc1182212d780ab38539839372c25eb989b2e70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:39:20.598029   22579 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 20:39:20.598869   22579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/kubeconfig: {Name:mkb9ab62df00efc21274382bbe7156f03baabac9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:39:20.599107   22579 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0528 20:39:20.599112   22579 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0528 20:39:20.599137   22579 start.go:240] waiting for startup goroutines ...
	I0528 20:39:20.599144   22579 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0528 20:39:20.599219   22579 addons.go:69] Setting storage-provisioner=true in profile "ha-908878"
	I0528 20:39:20.599239   22579 addons.go:69] Setting default-storageclass=true in profile "ha-908878"
	I0528 20:39:20.599253   22579 addons.go:234] Setting addon storage-provisioner=true in "ha-908878"
	I0528 20:39:20.599263   22579 config.go:182] Loaded profile config "ha-908878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 20:39:20.599279   22579 host.go:66] Checking if "ha-908878" exists ...
	I0528 20:39:20.599269   22579 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-908878"
	I0528 20:39:20.599630   22579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:39:20.599660   22579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:39:20.599662   22579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:39:20.599685   22579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:39:20.614397   22579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43433
	I0528 20:39:20.614413   22579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40303
	I0528 20:39:20.614823   22579 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:39:20.614877   22579 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:39:20.615282   22579 main.go:141] libmachine: Using API Version  1
	I0528 20:39:20.615301   22579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:39:20.615408   22579 main.go:141] libmachine: Using API Version  1
	I0528 20:39:20.615433   22579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:39:20.615641   22579 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:39:20.615774   22579 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:39:20.615946   22579 main.go:141] libmachine: (ha-908878) Calling .GetState
	I0528 20:39:20.616182   22579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:39:20.616214   22579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:39:20.618109   22579 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 20:39:20.618459   22579 kapi.go:59] client config for ha-908878: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/client.crt", KeyFile:"/home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/client.key", CAFile:"/home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf8220), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0528 20:39:20.618973   22579 cert_rotation.go:137] Starting client certificate rotation controller
	I0528 20:39:20.619180   22579 addons.go:234] Setting addon default-storageclass=true in "ha-908878"
	I0528 20:39:20.619228   22579 host.go:66] Checking if "ha-908878" exists ...
	I0528 20:39:20.619583   22579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:39:20.619614   22579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:39:20.630882   22579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34179
	I0528 20:39:20.631271   22579 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:39:20.631716   22579 main.go:141] libmachine: Using API Version  1
	I0528 20:39:20.631732   22579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:39:20.632083   22579 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:39:20.632316   22579 main.go:141] libmachine: (ha-908878) Calling .GetState
	I0528 20:39:20.633946   22579 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:39:20.633974   22579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39783
	I0528 20:39:20.636312   22579 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0528 20:39:20.634361   22579 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:39:20.637705   22579 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0528 20:39:20.637724   22579 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0528 20:39:20.637742   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:39:20.638109   22579 main.go:141] libmachine: Using API Version  1
	I0528 20:39:20.638135   22579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:39:20.638472   22579 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:39:20.639030   22579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:39:20.639073   22579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:39:20.640850   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:39:20.641225   22579 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:39:20.641262   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:39:20.641389   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:39:20.641581   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:39:20.641716   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:39:20.641865   22579 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/id_rsa Username:docker}
	I0528 20:39:20.654398   22579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36505
	I0528 20:39:20.654781   22579 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:39:20.655238   22579 main.go:141] libmachine: Using API Version  1
	I0528 20:39:20.655262   22579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:39:20.655554   22579 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:39:20.655753   22579 main.go:141] libmachine: (ha-908878) Calling .GetState
	I0528 20:39:20.657284   22579 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:39:20.657482   22579 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0528 20:39:20.657496   22579 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0528 20:39:20.657509   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:39:20.660368   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:39:20.660834   22579 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:39:20.660861   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:39:20.660947   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:39:20.661127   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:39:20.661288   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:39:20.661464   22579 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/id_rsa Username:docker}
	I0528 20:39:20.691834   22579 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0528 20:39:20.777748   22579 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0528 20:39:20.813678   22579 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0528 20:39:20.977077   22579 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
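The kubectl replace pipeline above rewrites the kube-system/coredns ConfigMap so that host.minikube.internal resolves to the host-only gateway 192.168.39.1 for the ha-908878 profile. A minimal sketch of confirming the injected hosts block from the Jenkins workstation, assuming kubectl and the ha-908878 kubeconfig context created by minikube are available:

	kubectl --context ha-908878 -n kube-system get configmap coredns -o yaml | grep -A 2 'host.minikube.internal'

The expected output is the 192.168.39.1 host.minikube.internal entry followed by fallthrough, matching the sed expression shown in the log.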
	I0528 20:39:21.384440   22579 main.go:141] libmachine: Making call to close driver server
	I0528 20:39:21.384468   22579 main.go:141] libmachine: (ha-908878) Calling .Close
	I0528 20:39:21.384470   22579 main.go:141] libmachine: Making call to close driver server
	I0528 20:39:21.384481   22579 main.go:141] libmachine: (ha-908878) Calling .Close
	I0528 20:39:21.384758   22579 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:39:21.384776   22579 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:39:21.384785   22579 main.go:141] libmachine: Making call to close driver server
	I0528 20:39:21.384793   22579 main.go:141] libmachine: (ha-908878) Calling .Close
	I0528 20:39:21.384800   22579 main.go:141] libmachine: (ha-908878) DBG | Closing plugin on server side
	I0528 20:39:21.384758   22579 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:39:21.384831   22579 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:39:21.384840   22579 main.go:141] libmachine: Making call to close driver server
	I0528 20:39:21.384848   22579 main.go:141] libmachine: (ha-908878) Calling .Close
	I0528 20:39:21.385038   22579 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:39:21.385052   22579 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:39:21.385064   22579 main.go:141] libmachine: (ha-908878) DBG | Closing plugin on server side
	I0528 20:39:21.385212   22579 main.go:141] libmachine: (ha-908878) DBG | Closing plugin on server side
	I0528 20:39:21.385234   22579 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:39:21.385247   22579 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:39:21.385365   22579 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0528 20:39:21.385412   22579 round_trippers.go:469] Request Headers:
	I0528 20:39:21.385426   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:39:21.385432   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:39:21.398255   22579 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0528 20:39:21.398756   22579 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0528 20:39:21.398769   22579 round_trippers.go:469] Request Headers:
	I0528 20:39:21.398776   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:39:21.398779   22579 round_trippers.go:473]     Content-Type: application/json
	I0528 20:39:21.398782   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:39:21.403303   22579 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 20:39:21.403439   22579 main.go:141] libmachine: Making call to close driver server
	I0528 20:39:21.403453   22579 main.go:141] libmachine: (ha-908878) Calling .Close
	I0528 20:39:21.403725   22579 main.go:141] libmachine: Successfully made call to close driver server
	I0528 20:39:21.403738   22579 main.go:141] libmachine: (ha-908878) DBG | Closing plugin on server side
	I0528 20:39:21.403744   22579 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 20:39:21.406222   22579 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0528 20:39:21.407409   22579 addons.go:510] duration metric: took 808.260358ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0528 20:39:21.407451   22579 start.go:245] waiting for cluster config update ...
	I0528 20:39:21.407469   22579 start.go:254] writing updated cluster config ...
	I0528 20:39:21.409022   22579 out.go:177] 
	I0528 20:39:21.410317   22579 config.go:182] Loaded profile config "ha-908878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 20:39:21.410381   22579 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/config.json ...
	I0528 20:39:21.412048   22579 out.go:177] * Starting "ha-908878-m02" control-plane node in "ha-908878" cluster
	I0528 20:39:21.413146   22579 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 20:39:21.413164   22579 cache.go:56] Caching tarball of preloaded images
	I0528 20:39:21.413243   22579 preload.go:173] Found /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0528 20:39:21.413255   22579 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0528 20:39:21.413312   22579 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/config.json ...
	I0528 20:39:21.413455   22579 start.go:360] acquireMachinesLock for ha-908878-m02: {Name:mk6fb3103b370c7f3d9923fd9de7afe2c77239d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0528 20:39:21.413499   22579 start.go:364] duration metric: took 26.01µs to acquireMachinesLock for "ha-908878-m02"
	I0528 20:39:21.413522   22579 start.go:93] Provisioning new machine with config: &{Name:ha-908878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-908878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0528 20:39:21.413616   22579 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0528 20:39:21.415020   22579 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0528 20:39:21.415087   22579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:39:21.415108   22579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:39:21.429900   22579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37587
	I0528 20:39:21.430248   22579 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:39:21.430757   22579 main.go:141] libmachine: Using API Version  1
	I0528 20:39:21.430776   22579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:39:21.431054   22579 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:39:21.431263   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetMachineName
	I0528 20:39:21.431412   22579 main.go:141] libmachine: (ha-908878-m02) Calling .DriverName
	I0528 20:39:21.431549   22579 start.go:159] libmachine.API.Create for "ha-908878" (driver="kvm2")
	I0528 20:39:21.431575   22579 client.go:168] LocalClient.Create starting
	I0528 20:39:21.431606   22579 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem
	I0528 20:39:21.431640   22579 main.go:141] libmachine: Decoding PEM data...
	I0528 20:39:21.431654   22579 main.go:141] libmachine: Parsing certificate...
	I0528 20:39:21.431700   22579 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem
	I0528 20:39:21.431717   22579 main.go:141] libmachine: Decoding PEM data...
	I0528 20:39:21.431727   22579 main.go:141] libmachine: Parsing certificate...
	I0528 20:39:21.431747   22579 main.go:141] libmachine: Running pre-create checks...
	I0528 20:39:21.431754   22579 main.go:141] libmachine: (ha-908878-m02) Calling .PreCreateCheck
	I0528 20:39:21.431906   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetConfigRaw
	I0528 20:39:21.432269   22579 main.go:141] libmachine: Creating machine...
	I0528 20:39:21.432284   22579 main.go:141] libmachine: (ha-908878-m02) Calling .Create
	I0528 20:39:21.432407   22579 main.go:141] libmachine: (ha-908878-m02) Creating KVM machine...
	I0528 20:39:21.433443   22579 main.go:141] libmachine: (ha-908878-m02) DBG | found existing default KVM network
	I0528 20:39:21.433607   22579 main.go:141] libmachine: (ha-908878-m02) DBG | found existing private KVM network mk-ha-908878
	I0528 20:39:21.433790   22579 main.go:141] libmachine: (ha-908878-m02) Setting up store path in /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m02 ...
	I0528 20:39:21.433816   22579 main.go:141] libmachine: (ha-908878-m02) Building disk image from file:///home/jenkins/minikube-integration/18966-3963/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso
	I0528 20:39:21.433833   22579 main.go:141] libmachine: (ha-908878-m02) DBG | I0528 20:39:21.433728   22978 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 20:39:21.433933   22579 main.go:141] libmachine: (ha-908878-m02) Downloading /home/jenkins/minikube-integration/18966-3963/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18966-3963/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0528 20:39:21.651560   22579 main.go:141] libmachine: (ha-908878-m02) DBG | I0528 20:39:21.651450   22978 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m02/id_rsa...
	I0528 20:39:21.796305   22579 main.go:141] libmachine: (ha-908878-m02) DBG | I0528 20:39:21.796147   22978 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m02/ha-908878-m02.rawdisk...
	I0528 20:39:21.796343   22579 main.go:141] libmachine: (ha-908878-m02) DBG | Writing magic tar header
	I0528 20:39:21.796358   22579 main.go:141] libmachine: (ha-908878-m02) DBG | Writing SSH key tar header
	I0528 20:39:21.796479   22579 main.go:141] libmachine: (ha-908878-m02) DBG | I0528 20:39:21.796391   22978 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m02 ...
	I0528 20:39:21.796538   22579 main.go:141] libmachine: (ha-908878-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m02
	I0528 20:39:21.796560   22579 main.go:141] libmachine: (ha-908878-m02) Setting executable bit set on /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m02 (perms=drwx------)
	I0528 20:39:21.796577   22579 main.go:141] libmachine: (ha-908878-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18966-3963/.minikube/machines
	I0528 20:39:21.796612   22579 main.go:141] libmachine: (ha-908878-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 20:39:21.796626   22579 main.go:141] libmachine: (ha-908878-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18966-3963
	I0528 20:39:21.796638   22579 main.go:141] libmachine: (ha-908878-m02) Setting executable bit set on /home/jenkins/minikube-integration/18966-3963/.minikube/machines (perms=drwxr-xr-x)
	I0528 20:39:21.796649   22579 main.go:141] libmachine: (ha-908878-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0528 20:39:21.796660   22579 main.go:141] libmachine: (ha-908878-m02) Setting executable bit set on /home/jenkins/minikube-integration/18966-3963/.minikube (perms=drwxr-xr-x)
	I0528 20:39:21.796668   22579 main.go:141] libmachine: (ha-908878-m02) DBG | Checking permissions on dir: /home/jenkins
	I0528 20:39:21.796676   22579 main.go:141] libmachine: (ha-908878-m02) Setting executable bit set on /home/jenkins/minikube-integration/18966-3963 (perms=drwxrwxr-x)
	I0528 20:39:21.796687   22579 main.go:141] libmachine: (ha-908878-m02) DBG | Checking permissions on dir: /home
	I0528 20:39:21.796702   22579 main.go:141] libmachine: (ha-908878-m02) DBG | Skipping /home - not owner
	I0528 20:39:21.796718   22579 main.go:141] libmachine: (ha-908878-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0528 20:39:21.796731   22579 main.go:141] libmachine: (ha-908878-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0528 20:39:21.796743   22579 main.go:141] libmachine: (ha-908878-m02) Creating domain...
	I0528 20:39:21.797858   22579 main.go:141] libmachine: (ha-908878-m02) define libvirt domain using xml: 
	I0528 20:39:21.797872   22579 main.go:141] libmachine: (ha-908878-m02) <domain type='kvm'>
	I0528 20:39:21.797879   22579 main.go:141] libmachine: (ha-908878-m02)   <name>ha-908878-m02</name>
	I0528 20:39:21.797884   22579 main.go:141] libmachine: (ha-908878-m02)   <memory unit='MiB'>2200</memory>
	I0528 20:39:21.797889   22579 main.go:141] libmachine: (ha-908878-m02)   <vcpu>2</vcpu>
	I0528 20:39:21.797894   22579 main.go:141] libmachine: (ha-908878-m02)   <features>
	I0528 20:39:21.797899   22579 main.go:141] libmachine: (ha-908878-m02)     <acpi/>
	I0528 20:39:21.797903   22579 main.go:141] libmachine: (ha-908878-m02)     <apic/>
	I0528 20:39:21.797909   22579 main.go:141] libmachine: (ha-908878-m02)     <pae/>
	I0528 20:39:21.797913   22579 main.go:141] libmachine: (ha-908878-m02)     
	I0528 20:39:21.797919   22579 main.go:141] libmachine: (ha-908878-m02)   </features>
	I0528 20:39:21.797926   22579 main.go:141] libmachine: (ha-908878-m02)   <cpu mode='host-passthrough'>
	I0528 20:39:21.797931   22579 main.go:141] libmachine: (ha-908878-m02)   
	I0528 20:39:21.797937   22579 main.go:141] libmachine: (ha-908878-m02)   </cpu>
	I0528 20:39:21.797962   22579 main.go:141] libmachine: (ha-908878-m02)   <os>
	I0528 20:39:21.797988   22579 main.go:141] libmachine: (ha-908878-m02)     <type>hvm</type>
	I0528 20:39:21.797999   22579 main.go:141] libmachine: (ha-908878-m02)     <boot dev='cdrom'/>
	I0528 20:39:21.798010   22579 main.go:141] libmachine: (ha-908878-m02)     <boot dev='hd'/>
	I0528 20:39:21.798019   22579 main.go:141] libmachine: (ha-908878-m02)     <bootmenu enable='no'/>
	I0528 20:39:21.798030   22579 main.go:141] libmachine: (ha-908878-m02)   </os>
	I0528 20:39:21.798038   22579 main.go:141] libmachine: (ha-908878-m02)   <devices>
	I0528 20:39:21.798050   22579 main.go:141] libmachine: (ha-908878-m02)     <disk type='file' device='cdrom'>
	I0528 20:39:21.798063   22579 main.go:141] libmachine: (ha-908878-m02)       <source file='/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m02/boot2docker.iso'/>
	I0528 20:39:21.798075   22579 main.go:141] libmachine: (ha-908878-m02)       <target dev='hdc' bus='scsi'/>
	I0528 20:39:21.798084   22579 main.go:141] libmachine: (ha-908878-m02)       <readonly/>
	I0528 20:39:21.798100   22579 main.go:141] libmachine: (ha-908878-m02)     </disk>
	I0528 20:39:21.798115   22579 main.go:141] libmachine: (ha-908878-m02)     <disk type='file' device='disk'>
	I0528 20:39:21.798128   22579 main.go:141] libmachine: (ha-908878-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0528 20:39:21.798146   22579 main.go:141] libmachine: (ha-908878-m02)       <source file='/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m02/ha-908878-m02.rawdisk'/>
	I0528 20:39:21.798158   22579 main.go:141] libmachine: (ha-908878-m02)       <target dev='hda' bus='virtio'/>
	I0528 20:39:21.798182   22579 main.go:141] libmachine: (ha-908878-m02)     </disk>
	I0528 20:39:21.798204   22579 main.go:141] libmachine: (ha-908878-m02)     <interface type='network'>
	I0528 20:39:21.798219   22579 main.go:141] libmachine: (ha-908878-m02)       <source network='mk-ha-908878'/>
	I0528 20:39:21.798236   22579 main.go:141] libmachine: (ha-908878-m02)       <model type='virtio'/>
	I0528 20:39:21.798248   22579 main.go:141] libmachine: (ha-908878-m02)     </interface>
	I0528 20:39:21.798259   22579 main.go:141] libmachine: (ha-908878-m02)     <interface type='network'>
	I0528 20:39:21.798267   22579 main.go:141] libmachine: (ha-908878-m02)       <source network='default'/>
	I0528 20:39:21.798275   22579 main.go:141] libmachine: (ha-908878-m02)       <model type='virtio'/>
	I0528 20:39:21.798285   22579 main.go:141] libmachine: (ha-908878-m02)     </interface>
	I0528 20:39:21.798296   22579 main.go:141] libmachine: (ha-908878-m02)     <serial type='pty'>
	I0528 20:39:21.798318   22579 main.go:141] libmachine: (ha-908878-m02)       <target port='0'/>
	I0528 20:39:21.798337   22579 main.go:141] libmachine: (ha-908878-m02)     </serial>
	I0528 20:39:21.798350   22579 main.go:141] libmachine: (ha-908878-m02)     <console type='pty'>
	I0528 20:39:21.798363   22579 main.go:141] libmachine: (ha-908878-m02)       <target type='serial' port='0'/>
	I0528 20:39:21.798375   22579 main.go:141] libmachine: (ha-908878-m02)     </console>
	I0528 20:39:21.798392   22579 main.go:141] libmachine: (ha-908878-m02)     <rng model='virtio'>
	I0528 20:39:21.798407   22579 main.go:141] libmachine: (ha-908878-m02)       <backend model='random'>/dev/random</backend>
	I0528 20:39:21.798417   22579 main.go:141] libmachine: (ha-908878-m02)     </rng>
	I0528 20:39:21.798425   22579 main.go:141] libmachine: (ha-908878-m02)     
	I0528 20:39:21.798441   22579 main.go:141] libmachine: (ha-908878-m02)     
	I0528 20:39:21.798453   22579 main.go:141] libmachine: (ha-908878-m02)   </devices>
	I0528 20:39:21.798467   22579 main.go:141] libmachine: (ha-908878-m02) </domain>
	I0528 20:39:21.798481   22579 main.go:141] libmachine: (ha-908878-m02) 
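The XML emitted above is the libvirt domain definition minikube generates for ha-908878-m02: 2 vCPUs, 2200 MiB of memory, the boot2docker ISO attached as a SCSI cdrom, the raw disk image as a virtio disk, and two virtio NICs on the mk-ha-908878 and default networks. A minimal sketch of inspecting the resulting domain with libvirt's standard CLI, assuming virsh is installed on the Jenkins host and the domain lives under qemu:///system as the KVMQemuURI in the profile config indicates:

	virsh --connect qemu:///system dumpxml ha-908878-m02
	virsh --connect qemu:///system domifaddr ha-908878-m02

dumpxml returns the definition as stored by libvirt, and domifaddr reports the DHCP-assigned address that the retry loop below is waiting for.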
	I0528 20:39:21.805065   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:0b:f1:4c in network default
	I0528 20:39:21.805662   22579 main.go:141] libmachine: (ha-908878-m02) Ensuring networks are active...
	I0528 20:39:21.805688   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:21.806449   22579 main.go:141] libmachine: (ha-908878-m02) Ensuring network default is active
	I0528 20:39:21.806884   22579 main.go:141] libmachine: (ha-908878-m02) Ensuring network mk-ha-908878 is active
	I0528 20:39:21.807245   22579 main.go:141] libmachine: (ha-908878-m02) Getting domain xml...
	I0528 20:39:21.808093   22579 main.go:141] libmachine: (ha-908878-m02) Creating domain...
	I0528 20:39:22.994138   22579 main.go:141] libmachine: (ha-908878-m02) Waiting to get IP...
	I0528 20:39:22.994884   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:22.995242   22579 main.go:141] libmachine: (ha-908878-m02) DBG | unable to find current IP address of domain ha-908878-m02 in network mk-ha-908878
	I0528 20:39:22.995300   22579 main.go:141] libmachine: (ha-908878-m02) DBG | I0528 20:39:22.995219   22978 retry.go:31] will retry after 236.223184ms: waiting for machine to come up
	I0528 20:39:23.232819   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:23.233218   22579 main.go:141] libmachine: (ha-908878-m02) DBG | unable to find current IP address of domain ha-908878-m02 in network mk-ha-908878
	I0528 20:39:23.233277   22579 main.go:141] libmachine: (ha-908878-m02) DBG | I0528 20:39:23.233197   22978 retry.go:31] will retry after 315.81749ms: waiting for machine to come up
	I0528 20:39:23.550722   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:23.551140   22579 main.go:141] libmachine: (ha-908878-m02) DBG | unable to find current IP address of domain ha-908878-m02 in network mk-ha-908878
	I0528 20:39:23.551166   22579 main.go:141] libmachine: (ha-908878-m02) DBG | I0528 20:39:23.551081   22978 retry.go:31] will retry after 387.67089ms: waiting for machine to come up
	I0528 20:39:23.940625   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:23.941028   22579 main.go:141] libmachine: (ha-908878-m02) DBG | unable to find current IP address of domain ha-908878-m02 in network mk-ha-908878
	I0528 20:39:23.941079   22579 main.go:141] libmachine: (ha-908878-m02) DBG | I0528 20:39:23.941011   22978 retry.go:31] will retry after 586.027605ms: waiting for machine to come up
	I0528 20:39:24.528941   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:24.529437   22579 main.go:141] libmachine: (ha-908878-m02) DBG | unable to find current IP address of domain ha-908878-m02 in network mk-ha-908878
	I0528 20:39:24.529464   22579 main.go:141] libmachine: (ha-908878-m02) DBG | I0528 20:39:24.529398   22978 retry.go:31] will retry after 558.346168ms: waiting for machine to come up
	I0528 20:39:25.088820   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:25.089261   22579 main.go:141] libmachine: (ha-908878-m02) DBG | unable to find current IP address of domain ha-908878-m02 in network mk-ha-908878
	I0528 20:39:25.089288   22579 main.go:141] libmachine: (ha-908878-m02) DBG | I0528 20:39:25.089229   22978 retry.go:31] will retry after 709.318188ms: waiting for machine to come up
	I0528 20:39:25.800541   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:25.801231   22579 main.go:141] libmachine: (ha-908878-m02) DBG | unable to find current IP address of domain ha-908878-m02 in network mk-ha-908878
	I0528 20:39:25.801256   22579 main.go:141] libmachine: (ha-908878-m02) DBG | I0528 20:39:25.801190   22978 retry.go:31] will retry after 727.346159ms: waiting for machine to come up
	I0528 20:39:26.530258   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:26.530750   22579 main.go:141] libmachine: (ha-908878-m02) DBG | unable to find current IP address of domain ha-908878-m02 in network mk-ha-908878
	I0528 20:39:26.530771   22579 main.go:141] libmachine: (ha-908878-m02) DBG | I0528 20:39:26.530692   22978 retry.go:31] will retry after 1.245703569s: waiting for machine to come up
	I0528 20:39:27.778331   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:27.778725   22579 main.go:141] libmachine: (ha-908878-m02) DBG | unable to find current IP address of domain ha-908878-m02 in network mk-ha-908878
	I0528 20:39:27.778748   22579 main.go:141] libmachine: (ha-908878-m02) DBG | I0528 20:39:27.778680   22978 retry.go:31] will retry after 1.486203146s: waiting for machine to come up
	I0528 20:39:29.267214   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:29.267633   22579 main.go:141] libmachine: (ha-908878-m02) DBG | unable to find current IP address of domain ha-908878-m02 in network mk-ha-908878
	I0528 20:39:29.267655   22579 main.go:141] libmachine: (ha-908878-m02) DBG | I0528 20:39:29.267589   22978 retry.go:31] will retry after 1.41229564s: waiting for machine to come up
	I0528 20:39:30.681044   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:30.681465   22579 main.go:141] libmachine: (ha-908878-m02) DBG | unable to find current IP address of domain ha-908878-m02 in network mk-ha-908878
	I0528 20:39:30.681496   22579 main.go:141] libmachine: (ha-908878-m02) DBG | I0528 20:39:30.681415   22978 retry.go:31] will retry after 2.449880559s: waiting for machine to come up
	I0528 20:39:33.133397   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:33.133838   22579 main.go:141] libmachine: (ha-908878-m02) DBG | unable to find current IP address of domain ha-908878-m02 in network mk-ha-908878
	I0528 20:39:33.133877   22579 main.go:141] libmachine: (ha-908878-m02) DBG | I0528 20:39:33.133803   22978 retry.go:31] will retry after 2.454593184s: waiting for machine to come up
	I0528 20:39:35.590824   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:35.591198   22579 main.go:141] libmachine: (ha-908878-m02) DBG | unable to find current IP address of domain ha-908878-m02 in network mk-ha-908878
	I0528 20:39:35.591220   22579 main.go:141] libmachine: (ha-908878-m02) DBG | I0528 20:39:35.591164   22978 retry.go:31] will retry after 4.393795339s: waiting for machine to come up
	I0528 20:39:39.986744   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:39.987158   22579 main.go:141] libmachine: (ha-908878-m02) DBG | unable to find current IP address of domain ha-908878-m02 in network mk-ha-908878
	I0528 20:39:39.987193   22579 main.go:141] libmachine: (ha-908878-m02) DBG | I0528 20:39:39.987105   22978 retry.go:31] will retry after 3.53535555s: waiting for machine to come up
	I0528 20:39:43.525125   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:43.525616   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has current primary IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:43.525648   22579 main.go:141] libmachine: (ha-908878-m02) Found IP for machine: 192.168.39.239
	I0528 20:39:43.525672   22579 main.go:141] libmachine: (ha-908878-m02) Reserving static IP address...
	I0528 20:39:43.526027   22579 main.go:141] libmachine: (ha-908878-m02) DBG | unable to find host DHCP lease matching {name: "ha-908878-m02", mac: "52:54:00:b4:bd:28", ip: "192.168.39.239"} in network mk-ha-908878
	I0528 20:39:43.595257   22579 main.go:141] libmachine: (ha-908878-m02) DBG | Getting to WaitForSSH function...
	I0528 20:39:43.595287   22579 main.go:141] libmachine: (ha-908878-m02) Reserved static IP address: 192.168.39.239
	I0528 20:39:43.595306   22579 main.go:141] libmachine: (ha-908878-m02) Waiting for SSH to be available...
	I0528 20:39:43.597568   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:43.597963   22579 main.go:141] libmachine: (ha-908878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:bd:28", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:39:35 +0000 UTC Type:0 Mac:52:54:00:b4:bd:28 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b4:bd:28}
	I0528 20:39:43.597992   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:43.598141   22579 main.go:141] libmachine: (ha-908878-m02) DBG | Using SSH client type: external
	I0528 20:39:43.598168   22579 main.go:141] libmachine: (ha-908878-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m02/id_rsa (-rw-------)
	I0528 20:39:43.598198   22579 main.go:141] libmachine: (ha-908878-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.239 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0528 20:39:43.598207   22579 main.go:141] libmachine: (ha-908878-m02) DBG | About to run SSH command:
	I0528 20:39:43.598256   22579 main.go:141] libmachine: (ha-908878-m02) DBG | exit 0
	I0528 20:39:43.721955   22579 main.go:141] libmachine: (ha-908878-m02) DBG | SSH cmd err, output: <nil>: 
	I0528 20:39:43.722226   22579 main.go:141] libmachine: (ha-908878-m02) KVM machine creation complete!
	I0528 20:39:43.722619   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetConfigRaw
	I0528 20:39:43.723230   22579 main.go:141] libmachine: (ha-908878-m02) Calling .DriverName
	I0528 20:39:43.723435   22579 main.go:141] libmachine: (ha-908878-m02) Calling .DriverName
	I0528 20:39:43.723579   22579 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0528 20:39:43.723597   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetState
	I0528 20:39:43.725144   22579 main.go:141] libmachine: Detecting operating system of created instance...
	I0528 20:39:43.725194   22579 main.go:141] libmachine: Waiting for SSH to be available...
	I0528 20:39:43.725210   22579 main.go:141] libmachine: Getting to WaitForSSH function...
	I0528 20:39:43.725222   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHHostname
	I0528 20:39:43.727491   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:43.727810   22579 main.go:141] libmachine: (ha-908878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:bd:28", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:39:35 +0000 UTC Type:0 Mac:52:54:00:b4:bd:28 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:ha-908878-m02 Clientid:01:52:54:00:b4:bd:28}
	I0528 20:39:43.727833   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:43.727949   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHPort
	I0528 20:39:43.728111   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHKeyPath
	I0528 20:39:43.728269   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHKeyPath
	I0528 20:39:43.728388   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHUsername
	I0528 20:39:43.728528   22579 main.go:141] libmachine: Using SSH client type: native
	I0528 20:39:43.728719   22579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0528 20:39:43.728730   22579 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0528 20:39:43.828757   22579 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 20:39:43.828777   22579 main.go:141] libmachine: Detecting the provisioner...
	I0528 20:39:43.828784   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHHostname
	I0528 20:39:43.831460   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:43.831804   22579 main.go:141] libmachine: (ha-908878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:bd:28", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:39:35 +0000 UTC Type:0 Mac:52:54:00:b4:bd:28 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:ha-908878-m02 Clientid:01:52:54:00:b4:bd:28}
	I0528 20:39:43.831830   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:43.831937   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHPort
	I0528 20:39:43.832131   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHKeyPath
	I0528 20:39:43.832315   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHKeyPath
	I0528 20:39:43.832471   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHUsername
	I0528 20:39:43.832653   22579 main.go:141] libmachine: Using SSH client type: native
	I0528 20:39:43.832802   22579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0528 20:39:43.832812   22579 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0528 20:39:43.934676   22579 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0528 20:39:43.934746   22579 main.go:141] libmachine: found compatible host: buildroot
	I0528 20:39:43.934760   22579 main.go:141] libmachine: Provisioning with buildroot...
	I0528 20:39:43.934772   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetMachineName
	I0528 20:39:43.935019   22579 buildroot.go:166] provisioning hostname "ha-908878-m02"
	I0528 20:39:43.935042   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetMachineName
	I0528 20:39:43.935200   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHHostname
	I0528 20:39:43.937676   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:43.937997   22579 main.go:141] libmachine: (ha-908878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:bd:28", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:39:35 +0000 UTC Type:0 Mac:52:54:00:b4:bd:28 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:ha-908878-m02 Clientid:01:52:54:00:b4:bd:28}
	I0528 20:39:43.938028   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:43.938141   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHPort
	I0528 20:39:43.938335   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHKeyPath
	I0528 20:39:43.938484   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHKeyPath
	I0528 20:39:43.938636   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHUsername
	I0528 20:39:43.938801   22579 main.go:141] libmachine: Using SSH client type: native
	I0528 20:39:43.939009   22579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0528 20:39:43.939022   22579 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-908878-m02 && echo "ha-908878-m02" | sudo tee /etc/hostname
	I0528 20:39:44.056989   22579 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-908878-m02
	
	I0528 20:39:44.057024   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHHostname
	I0528 20:39:44.059725   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:44.060086   22579 main.go:141] libmachine: (ha-908878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:bd:28", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:39:35 +0000 UTC Type:0 Mac:52:54:00:b4:bd:28 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:ha-908878-m02 Clientid:01:52:54:00:b4:bd:28}
	I0528 20:39:44.060114   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:44.060270   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHPort
	I0528 20:39:44.060431   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHKeyPath
	I0528 20:39:44.060580   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHKeyPath
	I0528 20:39:44.060743   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHUsername
	I0528 20:39:44.060929   22579 main.go:141] libmachine: Using SSH client type: native
	I0528 20:39:44.061103   22579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0528 20:39:44.061126   22579 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-908878-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-908878-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-908878-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 20:39:44.172823   22579 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 20:39:44.172854   22579 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18966-3963/.minikube CaCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18966-3963/.minikube}
	I0528 20:39:44.172872   22579 buildroot.go:174] setting up certificates
	I0528 20:39:44.172884   22579 provision.go:84] configureAuth start
	I0528 20:39:44.172898   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetMachineName
	I0528 20:39:44.173203   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetIP
	I0528 20:39:44.175787   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:44.176184   22579 main.go:141] libmachine: (ha-908878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:bd:28", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:39:35 +0000 UTC Type:0 Mac:52:54:00:b4:bd:28 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:ha-908878-m02 Clientid:01:52:54:00:b4:bd:28}
	I0528 20:39:44.176210   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:44.176376   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHHostname
	I0528 20:39:44.178910   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:44.179269   22579 main.go:141] libmachine: (ha-908878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:bd:28", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:39:35 +0000 UTC Type:0 Mac:52:54:00:b4:bd:28 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:ha-908878-m02 Clientid:01:52:54:00:b4:bd:28}
	I0528 20:39:44.179293   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:44.179411   22579 provision.go:143] copyHostCerts
	I0528 20:39:44.179444   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem
	I0528 20:39:44.179485   22579 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem, removing ...
	I0528 20:39:44.179496   22579 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem
	I0528 20:39:44.179581   22579 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem (1078 bytes)
	I0528 20:39:44.179667   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem
	I0528 20:39:44.179691   22579 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem, removing ...
	I0528 20:39:44.179698   22579 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem
	I0528 20:39:44.179741   22579 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem (1123 bytes)
	I0528 20:39:44.179833   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem
	I0528 20:39:44.179858   22579 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem, removing ...
	I0528 20:39:44.179864   22579 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem
	I0528 20:39:44.179904   22579 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem (1675 bytes)
	I0528 20:39:44.179969   22579 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem org=jenkins.ha-908878-m02 san=[127.0.0.1 192.168.39.239 ha-908878-m02 localhost minikube]
	I0528 20:39:44.294298   22579 provision.go:177] copyRemoteCerts
	I0528 20:39:44.294358   22579 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 20:39:44.294386   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHHostname
	I0528 20:39:44.297020   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:44.297346   22579 main.go:141] libmachine: (ha-908878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:bd:28", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:39:35 +0000 UTC Type:0 Mac:52:54:00:b4:bd:28 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:ha-908878-m02 Clientid:01:52:54:00:b4:bd:28}
	I0528 20:39:44.297374   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:44.297539   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHPort
	I0528 20:39:44.297731   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHKeyPath
	I0528 20:39:44.297887   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHUsername
	I0528 20:39:44.298017   22579 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m02/id_rsa Username:docker}
	I0528 20:39:44.379975   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0528 20:39:44.380050   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0528 20:39:44.403551   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0528 20:39:44.403610   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0528 20:39:44.426107   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0528 20:39:44.426156   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0528 20:39:44.448590   22579 provision.go:87] duration metric: took 275.694841ms to configureAuth
	I0528 20:39:44.448611   22579 buildroot.go:189] setting minikube options for container-runtime
	I0528 20:39:44.448776   22579 config.go:182] Loaded profile config "ha-908878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 20:39:44.448836   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHHostname
	I0528 20:39:44.451296   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:44.451597   22579 main.go:141] libmachine: (ha-908878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:bd:28", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:39:35 +0000 UTC Type:0 Mac:52:54:00:b4:bd:28 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:ha-908878-m02 Clientid:01:52:54:00:b4:bd:28}
	I0528 20:39:44.451616   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:44.451810   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHPort
	I0528 20:39:44.452002   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHKeyPath
	I0528 20:39:44.452165   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHKeyPath
	I0528 20:39:44.452323   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHUsername
	I0528 20:39:44.452459   22579 main.go:141] libmachine: Using SSH client type: native
	I0528 20:39:44.452620   22579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0528 20:39:44.452641   22579 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0528 20:39:44.718253   22579 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0528 20:39:44.718288   22579 main.go:141] libmachine: Checking connection to Docker...
	I0528 20:39:44.718297   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetURL
	I0528 20:39:44.719624   22579 main.go:141] libmachine: (ha-908878-m02) DBG | Using libvirt version 6000000
	I0528 20:39:44.721831   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:44.722136   22579 main.go:141] libmachine: (ha-908878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:bd:28", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:39:35 +0000 UTC Type:0 Mac:52:54:00:b4:bd:28 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:ha-908878-m02 Clientid:01:52:54:00:b4:bd:28}
	I0528 20:39:44.722157   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:44.722325   22579 main.go:141] libmachine: Docker is up and running!
	I0528 20:39:44.722345   22579 main.go:141] libmachine: Reticulating splines...
	I0528 20:39:44.722352   22579 client.go:171] duration metric: took 23.290767933s to LocalClient.Create
	I0528 20:39:44.722377   22579 start.go:167] duration metric: took 23.290828842s to libmachine.API.Create "ha-908878"
	I0528 20:39:44.722388   22579 start.go:293] postStartSetup for "ha-908878-m02" (driver="kvm2")
	I0528 20:39:44.722397   22579 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0528 20:39:44.722412   22579 main.go:141] libmachine: (ha-908878-m02) Calling .DriverName
	I0528 20:39:44.722616   22579 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0528 20:39:44.722640   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHHostname
	I0528 20:39:44.724676   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:44.725039   22579 main.go:141] libmachine: (ha-908878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:bd:28", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:39:35 +0000 UTC Type:0 Mac:52:54:00:b4:bd:28 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:ha-908878-m02 Clientid:01:52:54:00:b4:bd:28}
	I0528 20:39:44.725064   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:44.725193   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHPort
	I0528 20:39:44.725344   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHKeyPath
	I0528 20:39:44.725493   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHUsername
	I0528 20:39:44.725603   22579 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m02/id_rsa Username:docker}
	I0528 20:39:44.804629   22579 ssh_runner.go:195] Run: cat /etc/os-release
	I0528 20:39:44.808851   22579 info.go:137] Remote host: Buildroot 2023.02.9
	I0528 20:39:44.808874   22579 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/addons for local assets ...
	I0528 20:39:44.808942   22579 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/files for local assets ...
	I0528 20:39:44.809096   22579 filesync.go:149] local asset: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem -> 117602.pem in /etc/ssl/certs
	I0528 20:39:44.809120   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem -> /etc/ssl/certs/117602.pem
	I0528 20:39:44.809272   22579 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0528 20:39:44.819237   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem --> /etc/ssl/certs/117602.pem (1708 bytes)
	I0528 20:39:44.842672   22579 start.go:296] duration metric: took 120.272701ms for postStartSetup
	I0528 20:39:44.842716   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetConfigRaw
	I0528 20:39:44.843241   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetIP
	I0528 20:39:44.845666   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:44.846038   22579 main.go:141] libmachine: (ha-908878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:bd:28", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:39:35 +0000 UTC Type:0 Mac:52:54:00:b4:bd:28 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:ha-908878-m02 Clientid:01:52:54:00:b4:bd:28}
	I0528 20:39:44.846058   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:44.846224   22579 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/config.json ...
	I0528 20:39:44.846399   22579 start.go:128] duration metric: took 23.432774452s to createHost
	I0528 20:39:44.846419   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHHostname
	I0528 20:39:44.848699   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:44.849056   22579 main.go:141] libmachine: (ha-908878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:bd:28", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:39:35 +0000 UTC Type:0 Mac:52:54:00:b4:bd:28 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:ha-908878-m02 Clientid:01:52:54:00:b4:bd:28}
	I0528 20:39:44.849072   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:44.849201   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHPort
	I0528 20:39:44.849377   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHKeyPath
	I0528 20:39:44.849515   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHKeyPath
	I0528 20:39:44.849620   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHUsername
	I0528 20:39:44.849743   22579 main.go:141] libmachine: Using SSH client type: native
	I0528 20:39:44.849917   22579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0528 20:39:44.849928   22579 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0528 20:39:44.951339   22579 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716928784.926939138
	
	I0528 20:39:44.951359   22579 fix.go:216] guest clock: 1716928784.926939138
	I0528 20:39:44.951378   22579 fix.go:229] Guest: 2024-05-28 20:39:44.926939138 +0000 UTC Remote: 2024-05-28 20:39:44.846410206 +0000 UTC m=+76.370697906 (delta=80.528932ms)
	I0528 20:39:44.951409   22579 fix.go:200] guest clock delta is within tolerance: 80.528932ms
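fix.go above compares the guest's clock (read over SSH with `date`) against the host's and only resyncs when the delta exceeds a tolerance; here the roughly 80ms delta passes. A minimal sketch of that comparison, with the 1s tolerance as an illustrative assumption rather than minikube's actual threshold:

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK returns the absolute guest/host skew and whether it is within tolerance.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(80 * time.Millisecond) // e.g. the ~80ms delta seen in the log
	if d, ok := clockDeltaOK(guest, host, time.Second); ok {
		fmt.Printf("guest clock delta %v is within tolerance\n", d)
	}
}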
	I0528 20:39:44.951416   22579 start.go:83] releasing machines lock for "ha-908878-m02", held for 23.537904811s
	I0528 20:39:44.951434   22579 main.go:141] libmachine: (ha-908878-m02) Calling .DriverName
	I0528 20:39:44.951692   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetIP
	I0528 20:39:44.954325   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:44.954702   22579 main.go:141] libmachine: (ha-908878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:bd:28", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:39:35 +0000 UTC Type:0 Mac:52:54:00:b4:bd:28 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:ha-908878-m02 Clientid:01:52:54:00:b4:bd:28}
	I0528 20:39:44.954724   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:44.956651   22579 out.go:177] * Found network options:
	I0528 20:39:44.958031   22579 out.go:177]   - NO_PROXY=192.168.39.100
	W0528 20:39:44.959255   22579 proxy.go:119] fail to check proxy env: Error ip not in block
	I0528 20:39:44.959295   22579 main.go:141] libmachine: (ha-908878-m02) Calling .DriverName
	I0528 20:39:44.959786   22579 main.go:141] libmachine: (ha-908878-m02) Calling .DriverName
	I0528 20:39:44.959957   22579 main.go:141] libmachine: (ha-908878-m02) Calling .DriverName
	I0528 20:39:44.960029   22579 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0528 20:39:44.960075   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHHostname
	W0528 20:39:44.960148   22579 proxy.go:119] fail to check proxy env: Error ip not in block
	I0528 20:39:44.960221   22579 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0528 20:39:44.960242   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHHostname
	I0528 20:39:44.962556   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:44.962879   22579 main.go:141] libmachine: (ha-908878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:bd:28", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:39:35 +0000 UTC Type:0 Mac:52:54:00:b4:bd:28 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:ha-908878-m02 Clientid:01:52:54:00:b4:bd:28}
	I0528 20:39:44.962909   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:44.962929   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:44.963063   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHPort
	I0528 20:39:44.963233   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHKeyPath
	I0528 20:39:44.963371   22579 main.go:141] libmachine: (ha-908878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:bd:28", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:39:35 +0000 UTC Type:0 Mac:52:54:00:b4:bd:28 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:ha-908878-m02 Clientid:01:52:54:00:b4:bd:28}
	I0528 20:39:44.963394   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHUsername
	I0528 20:39:44.963393   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:44.963504   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHPort
	I0528 20:39:44.963569   22579 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m02/id_rsa Username:docker}
	I0528 20:39:44.963622   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHKeyPath
	I0528 20:39:44.963717   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHUsername
	I0528 20:39:44.963851   22579 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m02/id_rsa Username:docker}
	I0528 20:39:45.191050   22579 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0528 20:39:45.197540   22579 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0528 20:39:45.197614   22579 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 20:39:45.213549   22579 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0528 20:39:45.213573   22579 start.go:494] detecting cgroup driver to use...
	I0528 20:39:45.213632   22579 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0528 20:39:45.229419   22579 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 20:39:45.243034   22579 docker.go:217] disabling cri-docker service (if available) ...
	I0528 20:39:45.243096   22579 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0528 20:39:45.256232   22579 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0528 20:39:45.269876   22579 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0528 20:39:45.388677   22579 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0528 20:39:45.532176   22579 docker.go:233] disabling docker service ...
	I0528 20:39:45.532248   22579 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0528 20:39:45.547274   22579 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0528 20:39:45.559583   22579 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0528 20:39:45.693293   22579 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0528 20:39:45.828110   22579 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0528 20:39:45.844272   22579 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 20:39:45.862898   22579 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0528 20:39:45.862963   22579 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:39:45.872981   22579 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0528 20:39:45.873042   22579 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:39:45.882982   22579 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:39:45.892793   22579 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:39:45.902631   22579 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0528 20:39:45.912838   22579 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:39:45.922547   22579 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:39:45.939496   22579 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
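The run of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: the pause image, cgroup_manager, conmon_cgroup, and the net.ipv4.ip_unprivileged_port_start sysctl. A hedged sketch of the first few edits expressed as plain os/exec calls; minikube actually issues them on the guest through ssh_runner, and running this locally assumes sudo and an existing config file:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log
	edits := []string{
		// Point CRI-O at the pause image kubeadm expects.
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' ` + conf,
		// Match the kubelet's cgroup driver.
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' ` + conf,
		// Drop any existing conmon_cgroup, then re-add it after cgroup_manager.
		`sudo sed -i '/conmon_cgroup = .*/d' ` + conf,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' ` + conf,
	}
	for _, e := range edits {
		if out, err := exec.Command("sh", "-c", e).CombinedOutput(); err != nil {
			fmt.Printf("edit failed: %v\n%s", err, out)
			return
		}
	}
}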
	I0528 20:39:45.949578   22579 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0528 20:39:45.958529   22579 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0528 20:39:45.958578   22579 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0528 20:39:45.971321   22579 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0528 20:39:45.980291   22579 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 20:39:46.096244   22579 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0528 20:39:46.234036   22579 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0528 20:39:46.234107   22579 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0528 20:39:46.239030   22579 start.go:562] Will wait 60s for crictl version
	I0528 20:39:46.239075   22579 ssh_runner.go:195] Run: which crictl
	I0528 20:39:46.242841   22579 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0528 20:39:46.284071   22579 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0528 20:39:46.284155   22579 ssh_runner.go:195] Run: crio --version
	I0528 20:39:46.311989   22579 ssh_runner.go:195] Run: crio --version
	I0528 20:39:46.344750   22579 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0528 20:39:46.346078   22579 out.go:177]   - env NO_PROXY=192.168.39.100
	I0528 20:39:46.347390   22579 main.go:141] libmachine: (ha-908878-m02) Calling .GetIP
	I0528 20:39:46.350120   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:46.350476   22579 main.go:141] libmachine: (ha-908878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:bd:28", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:39:35 +0000 UTC Type:0 Mac:52:54:00:b4:bd:28 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:ha-908878-m02 Clientid:01:52:54:00:b4:bd:28}
	I0528 20:39:46.350500   22579 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:39:46.350656   22579 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0528 20:39:46.354730   22579 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 20:39:46.366962   22579 mustload.go:65] Loading cluster: ha-908878
	I0528 20:39:46.367142   22579 config.go:182] Loaded profile config "ha-908878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 20:39:46.367396   22579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:39:46.367427   22579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:39:46.382472   22579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33255
	I0528 20:39:46.382858   22579 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:39:46.383291   22579 main.go:141] libmachine: Using API Version  1
	I0528 20:39:46.383311   22579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:39:46.383606   22579 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:39:46.383785   22579 main.go:141] libmachine: (ha-908878) Calling .GetState
	I0528 20:39:46.385324   22579 host.go:66] Checking if "ha-908878" exists ...
	I0528 20:39:46.385658   22579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:39:46.385689   22579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:39:46.399803   22579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42729
	I0528 20:39:46.400242   22579 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:39:46.400660   22579 main.go:141] libmachine: Using API Version  1
	I0528 20:39:46.400680   22579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:39:46.400973   22579 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:39:46.401158   22579 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:39:46.401309   22579 certs.go:68] Setting up /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878 for IP: 192.168.39.239
	I0528 20:39:46.401319   22579 certs.go:194] generating shared ca certs ...
	I0528 20:39:46.401332   22579 certs.go:226] acquiring lock for ca certs: {Name:mkf081a2e878fd451cfbdf01dc28a5f7a6a3abc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:39:46.401442   22579 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key
	I0528 20:39:46.401476   22579 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key
	I0528 20:39:46.401485   22579 certs.go:256] generating profile certs ...
	I0528 20:39:46.401544   22579 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/client.key
	I0528 20:39:46.401568   22579 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key.d42c2f8b
	I0528 20:39:46.401581   22579 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt.d42c2f8b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.100 192.168.39.239 192.168.39.254]
	I0528 20:39:46.532027   22579 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt.d42c2f8b ...
	I0528 20:39:46.532054   22579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt.d42c2f8b: {Name:mk5230ac00b5ed8d9e975e2641c42648f309e058 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:39:46.532238   22579 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key.d42c2f8b ...
	I0528 20:39:46.532258   22579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key.d42c2f8b: {Name:mk7d4a0cf0ce90f7f8946c2980e1db3d0d9e0d90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:39:46.532356   22579 certs.go:381] copying /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt.d42c2f8b -> /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt
	I0528 20:39:46.532490   22579 certs.go:385] copying /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key.d42c2f8b -> /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key
	I0528 20:39:46.532608   22579 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/proxy-client.key
	I0528 20:39:46.532622   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0528 20:39:46.532634   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0528 20:39:46.532645   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0528 20:39:46.532658   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0528 20:39:46.532670   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0528 20:39:46.532679   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0528 20:39:46.532689   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0528 20:39:46.532697   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0528 20:39:46.532746   22579 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem (1338 bytes)
	W0528 20:39:46.532771   22579 certs.go:480] ignoring /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760_empty.pem, impossibly tiny 0 bytes
	I0528 20:39:46.532782   22579 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem (1675 bytes)
	I0528 20:39:46.532814   22579 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem (1078 bytes)
	I0528 20:39:46.532848   22579 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem (1123 bytes)
	I0528 20:39:46.532877   22579 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem (1675 bytes)
	I0528 20:39:46.532933   22579 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem (1708 bytes)
	I0528 20:39:46.532972   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0528 20:39:46.532993   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem -> /usr/share/ca-certificates/11760.pem
	I0528 20:39:46.533006   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem -> /usr/share/ca-certificates/117602.pem
	I0528 20:39:46.533038   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:39:46.535807   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:39:46.536152   22579 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:39:46.536181   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:39:46.536309   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:39:46.536490   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:39:46.536657   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:39:46.536781   22579 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/id_rsa Username:docker}
	I0528 20:39:46.610132   22579 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0528 20:39:46.615159   22579 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0528 20:39:46.626019   22579 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0528 20:39:46.630287   22579 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0528 20:39:46.641191   22579 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0528 20:39:46.645573   22579 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0528 20:39:46.655284   22579 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0528 20:39:46.659505   22579 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0528 20:39:46.669628   22579 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0528 20:39:46.673931   22579 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0528 20:39:46.684107   22579 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0528 20:39:46.688245   22579 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0528 20:39:46.698832   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0528 20:39:46.725261   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0528 20:39:46.750358   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0528 20:39:46.774000   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0528 20:39:46.797471   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0528 20:39:46.820832   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0528 20:39:46.844123   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0528 20:39:46.866834   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0528 20:39:46.889813   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0528 20:39:46.913363   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem --> /usr/share/ca-certificates/11760.pem (1338 bytes)
	I0528 20:39:46.937387   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem --> /usr/share/ca-certificates/117602.pem (1708 bytes)
	I0528 20:39:46.961455   22579 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0528 20:39:46.977450   22579 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0528 20:39:46.993498   22579 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0528 20:39:47.009273   22579 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0528 20:39:47.025204   22579 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0528 20:39:47.043162   22579 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0528 20:39:47.061042   22579 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0528 20:39:47.076891   22579 ssh_runner.go:195] Run: openssl version
	I0528 20:39:47.082486   22579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0528 20:39:47.092423   22579 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0528 20:39:47.096733   22579 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 28 20:22 /usr/share/ca-certificates/minikubeCA.pem
	I0528 20:39:47.096777   22579 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0528 20:39:47.102259   22579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0528 20:39:47.112355   22579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11760.pem && ln -fs /usr/share/ca-certificates/11760.pem /etc/ssl/certs/11760.pem"
	I0528 20:39:47.122538   22579 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11760.pem
	I0528 20:39:47.126830   22579 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 28 20:34 /usr/share/ca-certificates/11760.pem
	I0528 20:39:47.126886   22579 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11760.pem
	I0528 20:39:47.132503   22579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11760.pem /etc/ssl/certs/51391683.0"
	I0528 20:39:47.143222   22579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/117602.pem && ln -fs /usr/share/ca-certificates/117602.pem /etc/ssl/certs/117602.pem"
	I0528 20:39:47.154480   22579 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117602.pem
	I0528 20:39:47.159114   22579 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 28 20:34 /usr/share/ca-certificates/117602.pem
	I0528 20:39:47.159167   22579 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117602.pem
	I0528 20:39:47.164874   22579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/117602.pem /etc/ssl/certs/3ec20f2e.0"
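The openssl/ln sequence above registers each certificate system-wide: hash the cert's subject with `openssl x509 -hash -noout` and symlink it into /etc/ssl/certs as <hash>.0. A small sketch of that hash-and-symlink step (paths taken from the log; error handling trimmed):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash mirrors `openssl x509 -hash -noout -in cert` followed by
// `ln -fs cert /etc/ssl/certs/<hash>.0`.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // -f semantics: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println("link failed:", err)
	}
}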
	I0528 20:39:47.177374   22579 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0528 20:39:47.181611   22579 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0528 20:39:47.181662   22579 kubeadm.go:928] updating node {m02 192.168.39.239 8443 v1.30.1 crio true true} ...
	I0528 20:39:47.181750   22579 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-908878-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.239
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-908878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0528 20:39:47.181801   22579 kube-vip.go:115] generating kube-vip config ...
	I0528 20:39:47.181841   22579 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0528 20:39:47.198400   22579 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0528 20:39:47.198460   22579 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
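kube-vip.go:137 prints the static pod manifest generated for the control-plane VIP 192.168.39.254; the kubelet runs it once the file lands under /etc/kubernetes/manifests (the scp a few lines below). As a rough illustration of how such a manifest can be templated from the VIP address and port, using an abbreviated template that is not minikube's own:

package main

import (
	"os"
	"text/template"
)

// Abbreviated static-pod skeleton; only the fields relevant to the VIP are kept.
const kubeVipTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    args: ["manager"]
    env:
    - name: port
      value: "{{ .Port }}"
    - name: address
      value: "{{ .VIP }}"
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
	// In the log the rendered manifest ends up at /etc/kubernetes/manifests/kube-vip.yaml on the node.
	_ = t.Execute(os.Stdout, struct {
		VIP  string
		Port int
	}{VIP: "192.168.39.254", Port: 8443})
}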
	I0528 20:39:47.198505   22579 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0528 20:39:47.207958   22579 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0528 20:39:47.208011   22579 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0528 20:39:47.217671   22579 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0528 20:39:47.217699   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/linux/amd64/v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0528 20:39:47.217779   22579 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18966-3963/.minikube/cache/linux/amd64/v1.30.1/kubeadm
	I0528 20:39:47.217790   22579 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0528 20:39:47.217810   22579 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18966-3963/.minikube/cache/linux/amd64/v1.30.1/kubelet
	I0528 20:39:47.222049   22579 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0528 20:39:47.222071   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/cache/linux/amd64/v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0528 20:39:48.311013   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/linux/amd64/v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0528 20:39:48.311112   22579 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0528 20:39:48.317311   22579 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0528 20:39:48.317361   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/cache/linux/amd64/v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0528 20:39:48.705592   22579 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 20:39:48.720396   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/linux/amd64/v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0528 20:39:48.720483   22579 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0528 20:39:48.724928   22579 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0528 20:39:48.724952   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/cache/linux/amd64/v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
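binary.go above fetches kubectl, kubeadm, and kubelet from dl.k8s.io with a `checksum=file:...sha256` query, i.e. each download is verified against its published SHA-256 before being cached and scp'd to the node. A self-contained sketch of that verify-after-download step, with the checksum query handling simplified to a manual comparison:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// fetch returns the body of url or an error on a non-200 status.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl"
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	want, err := fetch(base + ".sha256") // published hex digest
	if err != nil {
		panic(err)
	}
	got := sha256.Sum256(bin)
	if hex.EncodeToString(got[:]) != strings.TrimSpace(string(want)) {
		panic("checksum mismatch: refusing to cache the binary")
	}
	fmt.Printf("verified %d bytes\n", len(bin))
}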
	I0528 20:39:49.138641   22579 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0528 20:39:49.148692   22579 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0528 20:39:49.164623   22579 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0528 20:39:49.179922   22579 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0528 20:39:49.196215   22579 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0528 20:39:49.199952   22579 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 20:39:49.212984   22579 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 20:39:49.342900   22579 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 20:39:49.359935   22579 host.go:66] Checking if "ha-908878" exists ...
	I0528 20:39:49.360416   22579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:39:49.360472   22579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:39:49.375579   22579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40659
	I0528 20:39:49.376059   22579 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:39:49.376504   22579 main.go:141] libmachine: Using API Version  1
	I0528 20:39:49.376526   22579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:39:49.376863   22579 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:39:49.377123   22579 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:39:49.377295   22579 start.go:316] joinCluster: &{Name:ha-908878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-908878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.239 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 20:39:49.377389   22579 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0528 20:39:49.377413   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:39:49.380241   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:39:49.380644   22579 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:39:49.380672   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:39:49.380804   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:39:49.380994   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:39:49.381127   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:39:49.381277   22579 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/id_rsa Username:docker}
	I0528 20:39:49.527440   22579 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.239 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0528 20:39:49.527480   22579 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token x9w3sn.kua0mpya9sje97dw --discovery-token-ca-cert-hash sha256:1de7c92f7c6fda1d3adee02fe3e58e442e8ad50d9d224864ca8764aa1a3ae8bb --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-908878-m02 --control-plane --apiserver-advertise-address=192.168.39.239 --apiserver-bind-port=8443"
	I0528 20:40:11.273238   22579 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token x9w3sn.kua0mpya9sje97dw --discovery-token-ca-cert-hash sha256:1de7c92f7c6fda1d3adee02fe3e58e442e8ad50d9d224864ca8764aa1a3ae8bb --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-908878-m02 --control-plane --apiserver-advertise-address=192.168.39.239 --apiserver-bind-port=8443": (21.74572677s)
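start.go:342 above joins m02 as a second control-plane node: the primary's `kubeadm token create --print-join-command` output is extended with the CRI socket, node name, and control-plane/advertise flags before being run on the new machine. A small sketch of assembling that command line (the token and discovery hash below are placeholders, not the values from this run):

package main

import (
	"fmt"
	"strings"
)

// joinCommand appends the per-node flags minikube adds to the token-create output.
func joinCommand(printed, criSocket, nodeName, advertiseIP string, port int) string {
	extra := []string{
		"--ignore-preflight-errors=all",
		"--cri-socket " + criSocket,
		"--node-name=" + nodeName,
		"--control-plane",
		"--apiserver-advertise-address=" + advertiseIP,
		fmt.Sprintf("--apiserver-bind-port=%d", port),
	}
	return printed + " " + strings.Join(extra, " ")
}

func main() {
	// Placeholder token/hash; the real ones come from `kubeadm token create --print-join-command`.
	printed := "kubeadm join control-plane.minikube.internal:8443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>"
	fmt.Println(joinCommand(printed, "unix:///var/run/crio/crio.sock", "ha-908878-m02", "192.168.39.239", 8443))
}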
	I0528 20:40:11.273280   22579 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0528 20:40:11.742426   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-908878-m02 minikube.k8s.io/updated_at=2024_05_28T20_40_11_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82 minikube.k8s.io/name=ha-908878 minikube.k8s.io/primary=false
	I0528 20:40:11.874237   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-908878-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0528 20:40:11.982143   22579 start.go:318] duration metric: took 22.604844073s to joinCluster
	I0528 20:40:11.982217   22579 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.239 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0528 20:40:11.983388   22579 out.go:177] * Verifying Kubernetes components...
	I0528 20:40:11.982510   22579 config.go:182] Loaded profile config "ha-908878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 20:40:11.984848   22579 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 20:40:12.282196   22579 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 20:40:12.356715   22579 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 20:40:12.357043   22579 kapi.go:59] client config for ha-908878: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/client.crt", KeyFile:"/home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/client.key", CAFile:"/home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf8220), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0528 20:40:12.357103   22579 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.100:8443
	I0528 20:40:12.357362   22579 node_ready.go:35] waiting up to 6m0s for node "ha-908878-m02" to be "Ready" ...
	I0528 20:40:12.357456   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:12.357466   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:12.357476   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:12.357481   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:12.367427   22579 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0528 20:40:12.858397   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:12.858419   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:12.858428   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:12.858432   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:12.862303   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:13.358491   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:13.358514   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:13.358521   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:13.358524   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:13.361452   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:13.858081   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:13.858106   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:13.858116   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:13.858121   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:13.863341   22579 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 20:40:14.357549   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:14.357570   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:14.357577   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:14.357582   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:14.360390   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:14.360996   22579 node_ready.go:53] node "ha-908878-m02" has status "Ready":"False"
	I0528 20:40:14.857913   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:14.857933   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:14.857941   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:14.857946   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:14.860547   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:15.357577   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:15.357599   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:15.357607   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:15.357612   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:15.361031   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:15.858180   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:15.858201   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:15.858212   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:15.858219   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:15.860946   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:16.357955   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:16.357981   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:16.357990   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:16.357995   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:16.361851   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:16.362569   22579 node_ready.go:53] node "ha-908878-m02" has status "Ready":"False"
	I0528 20:40:16.858218   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:16.858246   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:16.858258   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:16.858265   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:16.861523   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:17.357611   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:17.357641   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:17.357652   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:17.357657   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:17.361228   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:17.858310   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:17.858332   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:17.858341   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:17.858346   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:17.861833   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:18.357663   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:18.357687   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:18.357696   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:18.357701   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:18.360946   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:18.857606   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:18.857625   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:18.857633   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:18.857636   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:18.860654   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:18.861375   22579 node_ready.go:53] node "ha-908878-m02" has status "Ready":"False"
	I0528 20:40:19.357640   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:19.357667   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:19.357679   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:19.357684   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:19.360599   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:19.858085   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:19.858107   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:19.858114   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:19.858117   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:19.861490   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:19.862265   22579 node_ready.go:49] node "ha-908878-m02" has status "Ready":"True"
	I0528 20:40:19.862303   22579 node_ready.go:38] duration metric: took 7.504907421s for node "ha-908878-m02" to be "Ready" ...
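The roughly 500 ms GET loop above is the node_ready wait: keep fetching the node object until its NodeReady condition turns True or the 6m0s budget runs out. A hedged client-go sketch of an equivalent wait follows; the helper name waitNodeReady and the plain sleep loop are my own, and minikube's node_ready.go may structure this differently.

```go
package example

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the node object until its NodeReady condition is True
// or the timeout expires, mirroring the 500ms GET loop in the log above.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("node %q not Ready within %s", name, timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

// Usage in this run would be roughly:
// waitNodeReady(ctx, clientset, "ha-908878-m02", 6*time.Minute)
```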
	I0528 20:40:19.862314   22579 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 20:40:19.862372   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods
	I0528 20:40:19.862383   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:19.862393   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:19.862401   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:19.868588   22579 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0528 20:40:19.876604   22579 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-5fmns" in "kube-system" namespace to be "Ready" ...
	I0528 20:40:19.876682   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5fmns
	I0528 20:40:19.876694   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:19.876701   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:19.876707   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:19.879285   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:19.879865   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878
	I0528 20:40:19.879880   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:19.879887   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:19.879890   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:19.882172   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:19.882699   22579 pod_ready.go:92] pod "coredns-7db6d8ff4d-5fmns" in "kube-system" namespace has status "Ready":"True"
	I0528 20:40:19.882717   22579 pod_ready.go:81] duration metric: took 6.090072ms for pod "coredns-7db6d8ff4d-5fmns" in "kube-system" namespace to be "Ready" ...
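Each pod_ready check above boils down to reading the pod's status conditions. A small sketch of that predicate, assuming standard client-go types; the function name podIsReady is mine.

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
)

// podIsReady reports whether the pod's PodReady condition is True; this is
// the per-pod predicate the pod_ready waits above keep re-evaluating.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}
```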
	I0528 20:40:19.882727   22579 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mvx67" in "kube-system" namespace to be "Ready" ...
	I0528 20:40:19.882818   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mvx67
	I0528 20:40:19.882830   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:19.882840   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:19.882846   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:19.885132   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:19.885668   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878
	I0528 20:40:19.885681   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:19.885687   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:19.885692   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:19.888785   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:19.889868   22579 pod_ready.go:92] pod "coredns-7db6d8ff4d-mvx67" in "kube-system" namespace has status "Ready":"True"
	I0528 20:40:19.889886   22579 pod_ready.go:81] duration metric: took 7.150945ms for pod "coredns-7db6d8ff4d-mvx67" in "kube-system" namespace to be "Ready" ...
	I0528 20:40:19.889896   22579 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-908878" in "kube-system" namespace to be "Ready" ...
	I0528 20:40:19.889949   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878
	I0528 20:40:19.889961   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:19.889969   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:19.889974   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:19.892607   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:19.893158   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878
	I0528 20:40:19.893170   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:19.893176   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:19.893178   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:19.895416   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:19.895980   22579 pod_ready.go:92] pod "etcd-ha-908878" in "kube-system" namespace has status "Ready":"True"
	I0528 20:40:19.895995   22579 pod_ready.go:81] duration metric: took 6.092752ms for pod "etcd-ha-908878" in "kube-system" namespace to be "Ready" ...
	I0528 20:40:19.896002   22579 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-908878-m02" in "kube-system" namespace to be "Ready" ...
	I0528 20:40:19.896052   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m02
	I0528 20:40:19.896063   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:19.896073   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:19.896081   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:19.898796   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:19.899295   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:19.899307   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:19.899314   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:19.899318   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:19.901893   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:20.396912   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m02
	I0528 20:40:20.396935   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:20.396947   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:20.396951   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:20.399862   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:20.400491   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:20.400507   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:20.400514   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:20.400518   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:20.402904   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:20.897130   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m02
	I0528 20:40:20.897152   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:20.897159   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:20.897162   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:20.900991   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:20.902387   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:20.902400   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:20.902407   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:20.902411   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:20.905660   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:21.397166   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m02
	I0528 20:40:21.397185   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:21.397192   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:21.397196   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:21.400647   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:21.401700   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:21.401715   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:21.401724   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:21.401729   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:21.404362   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:21.897123   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m02
	I0528 20:40:21.897145   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:21.897154   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:21.897163   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:21.900081   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:21.900724   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:21.900738   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:21.900747   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:21.900752   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:21.903031   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:21.903517   22579 pod_ready.go:102] pod "etcd-ha-908878-m02" in "kube-system" namespace has status "Ready":"False"
	I0528 20:40:22.396851   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m02
	I0528 20:40:22.396873   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:22.396881   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:22.396886   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:22.399923   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:22.400771   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:22.400785   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:22.400792   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:22.400796   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:22.404167   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:22.896542   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m02
	I0528 20:40:22.896564   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:22.896582   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:22.896587   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:22.899781   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:22.900729   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:22.900742   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:22.900750   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:22.900754   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:22.903452   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:23.396319   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m02
	I0528 20:40:23.396345   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:23.396353   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:23.396357   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:23.399657   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:23.400305   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:23.400318   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:23.400325   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:23.400328   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:23.403206   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:23.896164   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m02
	I0528 20:40:23.896186   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:23.896194   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:23.896198   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:23.899319   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:23.900141   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:23.900158   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:23.900168   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:23.900172   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:23.902648   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:24.397151   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m02
	I0528 20:40:24.397171   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:24.397179   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:24.397184   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:24.400324   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:24.400944   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:24.400960   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:24.400967   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:24.400971   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:24.403693   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:24.404232   22579 pod_ready.go:102] pod "etcd-ha-908878-m02" in "kube-system" namespace has status "Ready":"False"
	I0528 20:40:24.896515   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m02
	I0528 20:40:24.896539   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:24.896545   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:24.896549   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:24.900241   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:24.901072   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:24.901088   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:24.901097   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:24.901104   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:24.903768   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:25.396925   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m02
	I0528 20:40:25.396948   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:25.396961   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:25.396968   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:25.399876   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:25.400663   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:25.400679   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:25.400689   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:25.400694   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:25.403298   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:25.896184   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m02
	I0528 20:40:25.896207   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:25.896215   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:25.896220   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:25.899491   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:25.900143   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:25.900158   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:25.900166   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:25.900171   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:25.902799   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:26.396296   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m02
	I0528 20:40:26.396315   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:26.396322   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:26.396327   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:26.399795   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:26.400376   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:26.400389   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:26.400397   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:26.400400   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:26.402974   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:26.896709   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m02
	I0528 20:40:26.896730   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:26.896738   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:26.896744   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:26.899957   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:26.900709   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:26.900724   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:26.900731   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:26.900735   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:26.904001   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:26.905064   22579 pod_ready.go:102] pod "etcd-ha-908878-m02" in "kube-system" namespace has status "Ready":"False"
	I0528 20:40:27.396489   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m02
	I0528 20:40:27.396510   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:27.396518   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:27.396522   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:27.401472   22579 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 20:40:27.402071   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:27.402087   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:27.402094   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:27.402099   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:27.404615   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:27.896445   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m02
	I0528 20:40:27.896470   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:27.896480   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:27.896487   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:27.899580   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:27.900404   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:27.900420   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:27.900428   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:27.900433   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:27.902851   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:28.396998   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m02
	I0528 20:40:28.397031   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:28.397043   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:28.397048   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:28.400447   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:28.401281   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:28.401296   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:28.401305   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:28.401309   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:28.404064   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:28.896987   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m02
	I0528 20:40:28.897012   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:28.897022   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:28.897033   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:28.900986   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:28.901922   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:28.901935   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:28.901942   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:28.901945   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:28.904836   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:28.905437   22579 pod_ready.go:102] pod "etcd-ha-908878-m02" in "kube-system" namespace has status "Ready":"False"
	I0528 20:40:29.396351   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m02
	I0528 20:40:29.396371   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:29.396379   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:29.396383   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:29.399460   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:29.400020   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:29.400034   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:29.400041   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:29.400044   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:29.402556   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:29.896484   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m02
	I0528 20:40:29.896519   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:29.896526   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:29.896530   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:29.899524   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:29.900201   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:29.900216   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:29.900222   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:29.900228   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:29.902578   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:29.903194   22579 pod_ready.go:92] pod "etcd-ha-908878-m02" in "kube-system" namespace has status "Ready":"True"
	I0528 20:40:29.903215   22579 pod_ready.go:81] duration metric: took 10.007205948s for pod "etcd-ha-908878-m02" in "kube-system" namespace to be "Ready" ...
	I0528 20:40:29.903233   22579 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-908878" in "kube-system" namespace to be "Ready" ...
	I0528 20:40:29.903288   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-908878
	I0528 20:40:29.903291   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:29.903298   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:29.903302   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:29.905453   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:29.906183   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878
	I0528 20:40:29.906200   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:29.906210   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:29.906221   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:29.908470   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:29.909003   22579 pod_ready.go:92] pod "kube-apiserver-ha-908878" in "kube-system" namespace has status "Ready":"True"
	I0528 20:40:29.909021   22579 pod_ready.go:81] duration metric: took 5.781531ms for pod "kube-apiserver-ha-908878" in "kube-system" namespace to be "Ready" ...
	I0528 20:40:29.909029   22579 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-908878-m02" in "kube-system" namespace to be "Ready" ...
	I0528 20:40:29.909072   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-908878-m02
	I0528 20:40:29.909079   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:29.909086   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:29.909094   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:29.911338   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:29.911924   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:29.911937   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:29.911944   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:29.911948   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:29.913819   22579 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0528 20:40:29.914272   22579 pod_ready.go:92] pod "kube-apiserver-ha-908878-m02" in "kube-system" namespace has status "Ready":"True"
	I0528 20:40:29.914287   22579 pod_ready.go:81] duration metric: took 5.252021ms for pod "kube-apiserver-ha-908878-m02" in "kube-system" namespace to be "Ready" ...
	I0528 20:40:29.914295   22579 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-908878" in "kube-system" namespace to be "Ready" ...
	I0528 20:40:29.914342   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-908878
	I0528 20:40:29.914351   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:29.914357   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:29.914361   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:29.917445   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:29.918464   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878
	I0528 20:40:29.918479   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:29.918487   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:29.918493   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:29.920744   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:29.921259   22579 pod_ready.go:92] pod "kube-controller-manager-ha-908878" in "kube-system" namespace has status "Ready":"True"
	I0528 20:40:29.921275   22579 pod_ready.go:81] duration metric: took 6.973107ms for pod "kube-controller-manager-ha-908878" in "kube-system" namespace to be "Ready" ...
	I0528 20:40:29.921282   22579 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-908878-m02" in "kube-system" namespace to be "Ready" ...
	I0528 20:40:29.921319   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-908878-m02
	I0528 20:40:29.921326   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:29.921332   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:29.921338   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:29.923660   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:29.924192   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:29.924207   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:29.924214   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:29.924219   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:29.926370   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:29.926754   22579 pod_ready.go:92] pod "kube-controller-manager-ha-908878-m02" in "kube-system" namespace has status "Ready":"True"
	I0528 20:40:29.926766   22579 pod_ready.go:81] duration metric: took 5.478491ms for pod "kube-controller-manager-ha-908878-m02" in "kube-system" namespace to be "Ready" ...
	I0528 20:40:29.926773   22579 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ng8mq" in "kube-system" namespace to be "Ready" ...
	I0528 20:40:30.097135   22579 request.go:629] Waited for 170.31592ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ng8mq
	I0528 20:40:30.097186   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ng8mq
	I0528 20:40:30.097191   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:30.097198   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:30.097204   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:30.100581   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:30.297511   22579 request.go:629] Waited for 196.357126ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-908878
	I0528 20:40:30.297569   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878
	I0528 20:40:30.297574   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:30.297581   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:30.297597   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:30.301046   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:30.301833   22579 pod_ready.go:92] pod "kube-proxy-ng8mq" in "kube-system" namespace has status "Ready":"True"
	I0528 20:40:30.301850   22579 pod_ready.go:81] duration metric: took 375.071009ms for pod "kube-proxy-ng8mq" in "kube-system" namespace to be "Ready" ...
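The "Waited ... due to client-side throttling, not priority and fairness" lines are produced by client-go's own rate limiter, not by the API server: with QPS and Burst left at 0 in the rest.Config dump earlier, client-go falls back to its defaults (roughly 5 requests/s with a burst of 10), so back-to-back polling GETs get delayed on the client side. A sketch of raising those limits on a rest.Config; the 50/100 values are arbitrary examples, not what minikube uses.

```go
package example

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// newFasterClient raises the client-side rate limits so bursts of polling
// requests are not queued by the default limiter (about 5 QPS, burst 10).
func newFasterClient(cfg *rest.Config) (*kubernetes.Clientset, error) {
	cfg.QPS = 50    // sustained client-side requests per second
	cfg.Burst = 100 // short-burst headroom before throttling kicks in
	return kubernetes.NewForConfig(cfg)
}
```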
	I0528 20:40:30.301861   22579 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pg89k" in "kube-system" namespace to be "Ready" ...
	I0528 20:40:30.497059   22579 request.go:629] Waited for 195.119235ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pg89k
	I0528 20:40:30.497120   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pg89k
	I0528 20:40:30.497127   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:30.497137   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:30.497146   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:30.500175   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:30.697193   22579 request.go:629] Waited for 195.998479ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:30.697246   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:30.697251   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:30.697257   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:30.697261   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:30.700322   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:30.700798   22579 pod_ready.go:92] pod "kube-proxy-pg89k" in "kube-system" namespace has status "Ready":"True"
	I0528 20:40:30.700813   22579 pod_ready.go:81] duration metric: took 398.943236ms for pod "kube-proxy-pg89k" in "kube-system" namespace to be "Ready" ...
	I0528 20:40:30.700821   22579 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-908878" in "kube-system" namespace to be "Ready" ...
	I0528 20:40:30.896880   22579 request.go:629] Waited for 195.997769ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-908878
	I0528 20:40:30.896957   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-908878
	I0528 20:40:30.896966   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:30.896976   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:30.897004   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:30.900173   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:31.097368   22579 request.go:629] Waited for 196.340666ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-908878
	I0528 20:40:31.097417   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878
	I0528 20:40:31.097423   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:31.097436   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:31.097442   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:31.101138   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:31.101882   22579 pod_ready.go:92] pod "kube-scheduler-ha-908878" in "kube-system" namespace has status "Ready":"True"
	I0528 20:40:31.101905   22579 pod_ready.go:81] duration metric: took 401.07596ms for pod "kube-scheduler-ha-908878" in "kube-system" namespace to be "Ready" ...
	I0528 20:40:31.101917   22579 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-908878-m02" in "kube-system" namespace to be "Ready" ...
	I0528 20:40:31.296878   22579 request.go:629] Waited for 194.881731ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-908878-m02
	I0528 20:40:31.296951   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-908878-m02
	I0528 20:40:31.296959   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:31.296970   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:31.296980   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:31.299929   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:40:31.496970   22579 request.go:629] Waited for 196.357718ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:31.497029   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:40:31.497036   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:31.497047   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:31.497051   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:31.500188   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:31.501110   22579 pod_ready.go:92] pod "kube-scheduler-ha-908878-m02" in "kube-system" namespace has status "Ready":"True"
	I0528 20:40:31.501131   22579 pod_ready.go:81] duration metric: took 399.206587ms for pod "kube-scheduler-ha-908878-m02" in "kube-system" namespace to be "Ready" ...
	I0528 20:40:31.501144   22579 pod_ready.go:38] duration metric: took 11.638817981s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 20:40:31.501161   22579 api_server.go:52] waiting for apiserver process to appear ...
	I0528 20:40:31.501233   22579 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 20:40:31.520498   22579 api_server.go:72] duration metric: took 19.538238682s to wait for apiserver process to appear ...
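Before probing the API endpoint, the test waits for a kube-apiserver process to exist by running sudo pgrep -xnf kube-apiserver.*minikube.* inside the VM. A rough local sketch of the same wait with os/exec; the real check goes through minikube's SSH runner, and the one-second retry interval here is an assumption.

```go
package example

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess retries pgrep until a matching process appears or the
// timeout expires; pgrep exits 0 when at least one process matches.
func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if err := exec.Command("sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("no process matching %q after %s", pattern, timeout)
		}
		time.Sleep(time.Second)
	}
}

// e.g. waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute)
```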
	I0528 20:40:31.520523   22579 api_server.go:88] waiting for apiserver healthz status ...
	I0528 20:40:31.520543   22579 api_server.go:253] Checking apiserver healthz at https://192.168.39.100:8443/healthz ...
	I0528 20:40:31.526513   22579 api_server.go:279] https://192.168.39.100:8443/healthz returned 200:
	ok
	I0528 20:40:31.526569   22579 round_trippers.go:463] GET https://192.168.39.100:8443/version
	I0528 20:40:31.526573   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:31.526581   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:31.526585   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:31.527447   22579 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 20:40:31.527535   22579 api_server.go:141] control plane version: v1.30.1
	I0528 20:40:31.527550   22579 api_server.go:131] duration metric: took 7.02174ms to wait for apiserver health ...
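The health wait then issues a plain GET against https://192.168.39.100:8443/healthz, expects a 200 response with body "ok", and follows up with GET /version to read the control-plane version (v1.30.1 above). Below is a minimal net/http sketch of the healthz probe using the cluster CA path from the log; checkHealthz is a made-up helper, not minikube's api_server.go.

```go
package example

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

// checkHealthz does a TLS GET against the apiserver's /healthz endpoint and
// returns an error unless it answers 200 with the body "ok".
func checkHealthz(host, caFile string) error {
	caPEM, err := os.ReadFile(caFile)
	if err != nil {
		return err
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
	}
	resp, err := client.Get(host + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
		return fmt.Errorf("healthz returned %d: %q", resp.StatusCode, body)
	}
	return nil
}

// e.g. checkHealthz("https://192.168.39.100:8443",
//	"/home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt")
```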
	I0528 20:40:31.527557   22579 system_pods.go:43] waiting for kube-system pods to appear ...
	I0528 20:40:31.696971   22579 request.go:629] Waited for 169.332456ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods
	I0528 20:40:31.697036   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods
	I0528 20:40:31.697043   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:31.697054   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:31.697064   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:31.702231   22579 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 20:40:31.706998   22579 system_pods.go:59] 17 kube-system pods found
	I0528 20:40:31.707031   22579 system_pods.go:61] "coredns-7db6d8ff4d-5fmns" [41a3bda1-29ba-4982-baf5-0adc97b4eb45] Running
	I0528 20:40:31.707037   22579 system_pods.go:61] "coredns-7db6d8ff4d-mvx67" [0b51beb7-0397-4008-b878-97edd41c6b94] Running
	I0528 20:40:31.707040   22579 system_pods.go:61] "etcd-ha-908878" [4cfaba35-0bd9-476b-95c2-abd111c4fcac] Running
	I0528 20:40:31.707044   22579 system_pods.go:61] "etcd-ha-908878-m02" [cb4f24be-dbf9-4c42-9a55-29cf6f0b6ecc] Running
	I0528 20:40:31.707047   22579 system_pods.go:61] "kindnet-6prxw" [77fae8b9-3abd-4a39-81ec-cc782b891331] Running
	I0528 20:40:31.707050   22579 system_pods.go:61] "kindnet-x4mzh" [8069a7ea-0ab1-4064-b982-867dbdfd97aa] Running
	I0528 20:40:31.707053   22579 system_pods.go:61] "kube-apiserver-ha-908878" [ff63f2af-3fc5-496c-b468-7447defad5e6] Running
	I0528 20:40:31.707056   22579 system_pods.go:61] "kube-apiserver-ha-908878-m02" [3a56592b-67cd-44d0-8907-2a62d4a6c671] Running
	I0528 20:40:31.707059   22579 system_pods.go:61] "kube-controller-manager-ha-908878" [e426060f-307d-41c7-8fb9-ab48709ce2a8] Running
	I0528 20:40:31.707062   22579 system_pods.go:61] "kube-controller-manager-ha-908878-m02" [232c3f41-5ba8-4fdf-848a-f8fb92f33a73] Running
	I0528 20:40:31.707065   22579 system_pods.go:61] "kube-proxy-ng8mq" [ca0b1264-09c7-44b2-ba8c-e145e825fdbe] Running
	I0528 20:40:31.707068   22579 system_pods.go:61] "kube-proxy-pg89k" [6eeda2cd-7b9e-440f-a8c3-c2ea8015106d] Running
	I0528 20:40:31.707072   22579 system_pods.go:61] "kube-scheduler-ha-908878" [7a9859a9-e92c-435b-a70e-5200f67d9589] Running
	I0528 20:40:31.707078   22579 system_pods.go:61] "kube-scheduler-ha-908878-m02" [c03b5557-cdca-4d39-800e-51a3a4f180b7] Running
	I0528 20:40:31.707081   22579 system_pods.go:61] "kube-vip-ha-908878" [45e2e6e2-6ea1-4498-aca1-9c3060eb9ca4] Running
	I0528 20:40:31.707084   22579 system_pods.go:61] "kube-vip-ha-908878-m02" [bcbc54fb-d0d4-422a-9e42-d61cd3f390ff] Running
	I0528 20:40:31.707089   22579 system_pods.go:61] "storage-provisioner" [d79872e2-b267-446a-99dc-5bf9f398d31c] Running
	I0528 20:40:31.707096   22579 system_pods.go:74] duration metric: took 179.532945ms to wait for pod list to return data ...
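The system_pods step lists everything in kube-system and requires each pod to be Running. A short client-go sketch of that check; allSystemPodsRunning is my name for it, not minikube's.

```go
package example

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// allSystemPodsRunning lists kube-system pods and returns an error for the
// first one that is not in the Running phase.
func allSystemPodsRunning(ctx context.Context, cs kubernetes.Interface) error {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			return fmt.Errorf("pod %q is %s, not Running", p.Name, p.Status.Phase)
		}
	}
	return nil
}
```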
	I0528 20:40:31.707107   22579 default_sa.go:34] waiting for default service account to be created ...
	I0528 20:40:31.897544   22579 request.go:629] Waited for 190.352879ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/default/serviceaccounts
	I0528 20:40:31.897618   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/default/serviceaccounts
	I0528 20:40:31.897623   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:31.897630   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:31.897636   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:31.901501   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:31.901704   22579 default_sa.go:45] found service account: "default"
	I0528 20:40:31.901720   22579 default_sa.go:55] duration metric: took 194.607645ms for default service account to be created ...
	I0528 20:40:31.901727   22579 system_pods.go:116] waiting for k8s-apps to be running ...
	I0528 20:40:32.097169   22579 request.go:629] Waited for 195.374316ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods
	I0528 20:40:32.097219   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods
	I0528 20:40:32.097224   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:32.097231   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:32.097256   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:32.102508   22579 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 20:40:32.109708   22579 system_pods.go:86] 17 kube-system pods found
	I0528 20:40:32.110148   22579 system_pods.go:89] "coredns-7db6d8ff4d-5fmns" [41a3bda1-29ba-4982-baf5-0adc97b4eb45] Running
	I0528 20:40:32.110186   22579 system_pods.go:89] "coredns-7db6d8ff4d-mvx67" [0b51beb7-0397-4008-b878-97edd41c6b94] Running
	I0528 20:40:32.110194   22579 system_pods.go:89] "etcd-ha-908878" [4cfaba35-0bd9-476b-95c2-abd111c4fcac] Running
	I0528 20:40:32.110201   22579 system_pods.go:89] "etcd-ha-908878-m02" [cb4f24be-dbf9-4c42-9a55-29cf6f0b6ecc] Running
	I0528 20:40:32.110208   22579 system_pods.go:89] "kindnet-6prxw" [77fae8b9-3abd-4a39-81ec-cc782b891331] Running
	I0528 20:40:32.110213   22579 system_pods.go:89] "kindnet-x4mzh" [8069a7ea-0ab1-4064-b982-867dbdfd97aa] Running
	I0528 20:40:32.110220   22579 system_pods.go:89] "kube-apiserver-ha-908878" [ff63f2af-3fc5-496c-b468-7447defad5e6] Running
	I0528 20:40:32.110227   22579 system_pods.go:89] "kube-apiserver-ha-908878-m02" [3a56592b-67cd-44d0-8907-2a62d4a6c671] Running
	I0528 20:40:32.110234   22579 system_pods.go:89] "kube-controller-manager-ha-908878" [e426060f-307d-41c7-8fb9-ab48709ce2a8] Running
	I0528 20:40:32.110244   22579 system_pods.go:89] "kube-controller-manager-ha-908878-m02" [232c3f41-5ba8-4fdf-848a-f8fb92f33a73] Running
	I0528 20:40:32.110253   22579 system_pods.go:89] "kube-proxy-ng8mq" [ca0b1264-09c7-44b2-ba8c-e145e825fdbe] Running
	I0528 20:40:32.110258   22579 system_pods.go:89] "kube-proxy-pg89k" [6eeda2cd-7b9e-440f-a8c3-c2ea8015106d] Running
	I0528 20:40:32.110264   22579 system_pods.go:89] "kube-scheduler-ha-908878" [7a9859a9-e92c-435b-a70e-5200f67d9589] Running
	I0528 20:40:32.110271   22579 system_pods.go:89] "kube-scheduler-ha-908878-m02" [c03b5557-cdca-4d39-800e-51a3a4f180b7] Running
	I0528 20:40:32.110276   22579 system_pods.go:89] "kube-vip-ha-908878" [45e2e6e2-6ea1-4498-aca1-9c3060eb9ca4] Running
	I0528 20:40:32.110287   22579 system_pods.go:89] "kube-vip-ha-908878-m02" [bcbc54fb-d0d4-422a-9e42-d61cd3f390ff] Running
	I0528 20:40:32.110294   22579 system_pods.go:89] "storage-provisioner" [d79872e2-b267-446a-99dc-5bf9f398d31c] Running
	I0528 20:40:32.110302   22579 system_pods.go:126] duration metric: took 208.569354ms to wait for k8s-apps to be running ...
	I0528 20:40:32.110311   22579 system_svc.go:44] waiting for kubelet service to be running ....
	I0528 20:40:32.110363   22579 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 20:40:32.126032   22579 system_svc.go:56] duration metric: took 15.712055ms WaitForService to wait for kubelet
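The kubelet check runs sudo systemctl is-active --quiet service kubelet on the node; with --quiet nothing is printed and the exit status alone reports whether the unit is active. A local sketch of that exit-code check; the real test sends the command over SSH.

```go
package example

import (
	"fmt"
	"os/exec"
)

// kubeletActive runs the same command the log shows; with --quiet, systemctl
// prints nothing and signals the unit state through its exit status.
func kubeletActive() error {
	if err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run(); err != nil {
		return fmt.Errorf("kubelet service is not active: %w", err)
	}
	return nil
}
```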
	I0528 20:40:32.126069   22579 kubeadm.go:576] duration metric: took 20.143813701s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 20:40:32.126095   22579 node_conditions.go:102] verifying NodePressure condition ...
	I0528 20:40:32.297495   22579 request.go:629] Waited for 171.325182ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes
	I0528 20:40:32.297568   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes
	I0528 20:40:32.297575   22579 round_trippers.go:469] Request Headers:
	I0528 20:40:32.297586   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:40:32.297595   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:40:32.301176   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:40:32.302179   22579 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 20:40:32.302203   22579 node_conditions.go:123] node cpu capacity is 2
	I0528 20:40:32.302223   22579 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 20:40:32.302226   22579 node_conditions.go:123] node cpu capacity is 2
	I0528 20:40:32.302230   22579 node_conditions.go:105] duration metric: took 176.129957ms to run NodePressure ...
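The NodePressure step reads each node's capacity from the API server. A hedged client-go sketch that prints the same cpu and ephemeral-storage figures, assuming a default kubeconfig; this is illustrative only, not minikube's implementation:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load ~/.kube/config; adjust if the profile writes its own kubeconfig.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
        }
    }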
	I0528 20:40:32.302240   22579 start.go:240] waiting for startup goroutines ...
	I0528 20:40:32.302273   22579 start.go:254] writing updated cluster config ...
	I0528 20:40:32.304519   22579 out.go:177] 
	I0528 20:40:32.306057   22579 config.go:182] Loaded profile config "ha-908878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 20:40:32.306152   22579 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/config.json ...
	I0528 20:40:32.307616   22579 out.go:177] * Starting "ha-908878-m03" control-plane node in "ha-908878" cluster
	I0528 20:40:32.308633   22579 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 20:40:32.308655   22579 cache.go:56] Caching tarball of preloaded images
	I0528 20:40:32.308744   22579 preload.go:173] Found /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0528 20:40:32.308757   22579 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0528 20:40:32.308858   22579 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/config.json ...
	I0528 20:40:32.309023   22579 start.go:360] acquireMachinesLock for ha-908878-m03: {Name:mk6fb3103b370c7f3d9923fd9de7afe2c77239d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0528 20:40:32.309063   22579 start.go:364] duration metric: took 22.465µs to acquireMachinesLock for "ha-908878-m03"
	I0528 20:40:32.309079   22579 start.go:93] Provisioning new machine with config: &{Name:ha-908878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-908878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.239 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
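For readability, the Nodes slice embedded in the config above, rewritten as a Go literal; the NodeSpec type name is a placeholder, not minikube's real type:

    package main

    import "fmt"

    type NodeSpec struct {
        Name              string
        IP                string
        Port              int
        KubernetesVersion string
        ContainerRuntime  string
        ControlPlane      bool
        Worker            bool
    }

    func main() {
        nodes := []NodeSpec{
            {Name: "", IP: "192.168.39.100", Port: 8443, KubernetesVersion: "v1.30.1", ContainerRuntime: "crio", ControlPlane: true, Worker: true},
            {Name: "m02", IP: "192.168.39.239", Port: 8443, KubernetesVersion: "v1.30.1", ContainerRuntime: "crio", ControlPlane: true, Worker: true},
            // m03 has no IP yet; it is assigned below once the VM gets a DHCP lease.
            {Name: "m03", IP: "", Port: 8443, KubernetesVersion: "v1.30.1", ContainerRuntime: "crio", ControlPlane: true, Worker: true},
        }
        fmt.Printf("%+v\n", nodes)
    }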
	I0528 20:40:32.309170   22579 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0528 20:40:32.310490   22579 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0528 20:40:32.310572   22579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:40:32.310602   22579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:40:32.325282   22579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36189
	I0528 20:40:32.325769   22579 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:40:32.326253   22579 main.go:141] libmachine: Using API Version  1
	I0528 20:40:32.326275   22579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:40:32.326564   22579 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:40:32.326778   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetMachineName
	I0528 20:40:32.326890   22579 main.go:141] libmachine: (ha-908878-m03) Calling .DriverName
	I0528 20:40:32.327052   22579 start.go:159] libmachine.API.Create for "ha-908878" (driver="kvm2")
	I0528 20:40:32.327078   22579 client.go:168] LocalClient.Create starting
	I0528 20:40:32.327105   22579 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem
	I0528 20:40:32.327137   22579 main.go:141] libmachine: Decoding PEM data...
	I0528 20:40:32.327168   22579 main.go:141] libmachine: Parsing certificate...
	I0528 20:40:32.327215   22579 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem
	I0528 20:40:32.327234   22579 main.go:141] libmachine: Decoding PEM data...
	I0528 20:40:32.327246   22579 main.go:141] libmachine: Parsing certificate...
	I0528 20:40:32.327263   22579 main.go:141] libmachine: Running pre-create checks...
	I0528 20:40:32.327276   22579 main.go:141] libmachine: (ha-908878-m03) Calling .PreCreateCheck
	I0528 20:40:32.327406   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetConfigRaw
	I0528 20:40:32.327766   22579 main.go:141] libmachine: Creating machine...
	I0528 20:40:32.327779   22579 main.go:141] libmachine: (ha-908878-m03) Calling .Create
	I0528 20:40:32.327882   22579 main.go:141] libmachine: (ha-908878-m03) Creating KVM machine...
	I0528 20:40:32.328975   22579 main.go:141] libmachine: (ha-908878-m03) DBG | found existing default KVM network
	I0528 20:40:32.329121   22579 main.go:141] libmachine: (ha-908878-m03) DBG | found existing private KVM network mk-ha-908878
	I0528 20:40:32.329218   22579 main.go:141] libmachine: (ha-908878-m03) Setting up store path in /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m03 ...
	I0528 20:40:32.329248   22579 main.go:141] libmachine: (ha-908878-m03) Building disk image from file:///home/jenkins/minikube-integration/18966-3963/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso
	I0528 20:40:32.329322   22579 main.go:141] libmachine: (ha-908878-m03) DBG | I0528 20:40:32.329224   23357 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 20:40:32.329418   22579 main.go:141] libmachine: (ha-908878-m03) Downloading /home/jenkins/minikube-integration/18966-3963/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18966-3963/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0528 20:40:32.547551   22579 main.go:141] libmachine: (ha-908878-m03) DBG | I0528 20:40:32.547423   23357 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m03/id_rsa...
	I0528 20:40:32.777813   22579 main.go:141] libmachine: (ha-908878-m03) DBG | I0528 20:40:32.777665   23357 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m03/ha-908878-m03.rawdisk...
	I0528 20:40:32.777853   22579 main.go:141] libmachine: (ha-908878-m03) DBG | Writing magic tar header
	I0528 20:40:32.777892   22579 main.go:141] libmachine: (ha-908878-m03) DBG | Writing SSH key tar header
	I0528 20:40:32.777934   22579 main.go:141] libmachine: (ha-908878-m03) DBG | I0528 20:40:32.777826   23357 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m03 ...
	I0528 20:40:32.777969   22579 main.go:141] libmachine: (ha-908878-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m03
	I0528 20:40:32.777995   22579 main.go:141] libmachine: (ha-908878-m03) Setting executable bit set on /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m03 (perms=drwx------)
	I0528 20:40:32.778011   22579 main.go:141] libmachine: (ha-908878-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18966-3963/.minikube/machines
	I0528 20:40:32.778027   22579 main.go:141] libmachine: (ha-908878-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 20:40:32.778041   22579 main.go:141] libmachine: (ha-908878-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18966-3963
	I0528 20:40:32.778056   22579 main.go:141] libmachine: (ha-908878-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0528 20:40:32.778068   22579 main.go:141] libmachine: (ha-908878-m03) DBG | Checking permissions on dir: /home/jenkins
	I0528 20:40:32.778080   22579 main.go:141] libmachine: (ha-908878-m03) DBG | Checking permissions on dir: /home
	I0528 20:40:32.778096   22579 main.go:141] libmachine: (ha-908878-m03) DBG | Skipping /home - not owner
	I0528 20:40:32.778109   22579 main.go:141] libmachine: (ha-908878-m03) Setting executable bit set on /home/jenkins/minikube-integration/18966-3963/.minikube/machines (perms=drwxr-xr-x)
	I0528 20:40:32.778124   22579 main.go:141] libmachine: (ha-908878-m03) Setting executable bit set on /home/jenkins/minikube-integration/18966-3963/.minikube (perms=drwxr-xr-x)
	I0528 20:40:32.778137   22579 main.go:141] libmachine: (ha-908878-m03) Setting executable bit set on /home/jenkins/minikube-integration/18966-3963 (perms=drwxrwxr-x)
	I0528 20:40:32.778150   22579 main.go:141] libmachine: (ha-908878-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0528 20:40:32.778161   22579 main.go:141] libmachine: (ha-908878-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0528 20:40:32.778212   22579 main.go:141] libmachine: (ha-908878-m03) Creating domain...
	I0528 20:40:32.779207   22579 main.go:141] libmachine: (ha-908878-m03) define libvirt domain using xml: 
	I0528 20:40:32.779231   22579 main.go:141] libmachine: (ha-908878-m03) <domain type='kvm'>
	I0528 20:40:32.779243   22579 main.go:141] libmachine: (ha-908878-m03)   <name>ha-908878-m03</name>
	I0528 20:40:32.779250   22579 main.go:141] libmachine: (ha-908878-m03)   <memory unit='MiB'>2200</memory>
	I0528 20:40:32.779259   22579 main.go:141] libmachine: (ha-908878-m03)   <vcpu>2</vcpu>
	I0528 20:40:32.779265   22579 main.go:141] libmachine: (ha-908878-m03)   <features>
	I0528 20:40:32.779273   22579 main.go:141] libmachine: (ha-908878-m03)     <acpi/>
	I0528 20:40:32.779279   22579 main.go:141] libmachine: (ha-908878-m03)     <apic/>
	I0528 20:40:32.779288   22579 main.go:141] libmachine: (ha-908878-m03)     <pae/>
	I0528 20:40:32.779298   22579 main.go:141] libmachine: (ha-908878-m03)     
	I0528 20:40:32.779308   22579 main.go:141] libmachine: (ha-908878-m03)   </features>
	I0528 20:40:32.779330   22579 main.go:141] libmachine: (ha-908878-m03)   <cpu mode='host-passthrough'>
	I0528 20:40:32.779347   22579 main.go:141] libmachine: (ha-908878-m03)   
	I0528 20:40:32.779356   22579 main.go:141] libmachine: (ha-908878-m03)   </cpu>
	I0528 20:40:32.779362   22579 main.go:141] libmachine: (ha-908878-m03)   <os>
	I0528 20:40:32.779375   22579 main.go:141] libmachine: (ha-908878-m03)     <type>hvm</type>
	I0528 20:40:32.779387   22579 main.go:141] libmachine: (ha-908878-m03)     <boot dev='cdrom'/>
	I0528 20:40:32.779397   22579 main.go:141] libmachine: (ha-908878-m03)     <boot dev='hd'/>
	I0528 20:40:32.779404   22579 main.go:141] libmachine: (ha-908878-m03)     <bootmenu enable='no'/>
	I0528 20:40:32.779412   22579 main.go:141] libmachine: (ha-908878-m03)   </os>
	I0528 20:40:32.779420   22579 main.go:141] libmachine: (ha-908878-m03)   <devices>
	I0528 20:40:32.779430   22579 main.go:141] libmachine: (ha-908878-m03)     <disk type='file' device='cdrom'>
	I0528 20:40:32.779445   22579 main.go:141] libmachine: (ha-908878-m03)       <source file='/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m03/boot2docker.iso'/>
	I0528 20:40:32.779467   22579 main.go:141] libmachine: (ha-908878-m03)       <target dev='hdc' bus='scsi'/>
	I0528 20:40:32.779480   22579 main.go:141] libmachine: (ha-908878-m03)       <readonly/>
	I0528 20:40:32.779491   22579 main.go:141] libmachine: (ha-908878-m03)     </disk>
	I0528 20:40:32.779502   22579 main.go:141] libmachine: (ha-908878-m03)     <disk type='file' device='disk'>
	I0528 20:40:32.779514   22579 main.go:141] libmachine: (ha-908878-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0528 20:40:32.779522   22579 main.go:141] libmachine: (ha-908878-m03)       <source file='/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m03/ha-908878-m03.rawdisk'/>
	I0528 20:40:32.779531   22579 main.go:141] libmachine: (ha-908878-m03)       <target dev='hda' bus='virtio'/>
	I0528 20:40:32.779536   22579 main.go:141] libmachine: (ha-908878-m03)     </disk>
	I0528 20:40:32.779542   22579 main.go:141] libmachine: (ha-908878-m03)     <interface type='network'>
	I0528 20:40:32.779547   22579 main.go:141] libmachine: (ha-908878-m03)       <source network='mk-ha-908878'/>
	I0528 20:40:32.779572   22579 main.go:141] libmachine: (ha-908878-m03)       <model type='virtio'/>
	I0528 20:40:32.779595   22579 main.go:141] libmachine: (ha-908878-m03)     </interface>
	I0528 20:40:32.779607   22579 main.go:141] libmachine: (ha-908878-m03)     <interface type='network'>
	I0528 20:40:32.779613   22579 main.go:141] libmachine: (ha-908878-m03)       <source network='default'/>
	I0528 20:40:32.779625   22579 main.go:141] libmachine: (ha-908878-m03)       <model type='virtio'/>
	I0528 20:40:32.779636   22579 main.go:141] libmachine: (ha-908878-m03)     </interface>
	I0528 20:40:32.779646   22579 main.go:141] libmachine: (ha-908878-m03)     <serial type='pty'>
	I0528 20:40:32.779657   22579 main.go:141] libmachine: (ha-908878-m03)       <target port='0'/>
	I0528 20:40:32.779667   22579 main.go:141] libmachine: (ha-908878-m03)     </serial>
	I0528 20:40:32.779680   22579 main.go:141] libmachine: (ha-908878-m03)     <console type='pty'>
	I0528 20:40:32.779690   22579 main.go:141] libmachine: (ha-908878-m03)       <target type='serial' port='0'/>
	I0528 20:40:32.779699   22579 main.go:141] libmachine: (ha-908878-m03)     </console>
	I0528 20:40:32.779705   22579 main.go:141] libmachine: (ha-908878-m03)     <rng model='virtio'>
	I0528 20:40:32.779720   22579 main.go:141] libmachine: (ha-908878-m03)       <backend model='random'>/dev/random</backend>
	I0528 20:40:32.779731   22579 main.go:141] libmachine: (ha-908878-m03)     </rng>
	I0528 20:40:32.779742   22579 main.go:141] libmachine: (ha-908878-m03)     
	I0528 20:40:32.779752   22579 main.go:141] libmachine: (ha-908878-m03)     
	I0528 20:40:32.779760   22579 main.go:141] libmachine: (ha-908878-m03)   </devices>
	I0528 20:40:32.779769   22579 main.go:141] libmachine: (ha-908878-m03) </domain>
	I0528 20:40:32.779779   22579 main.go:141] libmachine: (ha-908878-m03) 
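A rough sketch of defining and starting a domain from an XML document like the one printed above, assuming the libvirt.org/go/libvirt bindings; the kvm2 driver's own code path may differ:

    package main

    import (
        "os"

        libvirt "libvirt.org/go/libvirt"
    )

    func main() {
        // The <domain> document above, saved to a file (hypothetical path).
        xml, err := os.ReadFile("ha-908878-m03.xml")
        if err != nil {
            panic(err)
        }
        // Matches the KVMQemuURI value in the provisioning config.
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        // Persistently define the domain from XML, then boot it
        // ("Creating domain..." in the log).
        dom, err := conn.DomainDefineXML(string(xml))
        if err != nil {
            panic(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil {
            panic(err)
        }
    }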
	I0528 20:40:32.785969   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:b7:c2:f7 in network default
	I0528 20:40:32.786495   22579 main.go:141] libmachine: (ha-908878-m03) Ensuring networks are active...
	I0528 20:40:32.786513   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:32.787177   22579 main.go:141] libmachine: (ha-908878-m03) Ensuring network default is active
	I0528 20:40:32.787502   22579 main.go:141] libmachine: (ha-908878-m03) Ensuring network mk-ha-908878 is active
	I0528 20:40:32.787897   22579 main.go:141] libmachine: (ha-908878-m03) Getting domain xml...
	I0528 20:40:32.788680   22579 main.go:141] libmachine: (ha-908878-m03) Creating domain...
	I0528 20:40:34.013976   22579 main.go:141] libmachine: (ha-908878-m03) Waiting to get IP...
	I0528 20:40:34.014793   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:34.015195   22579 main.go:141] libmachine: (ha-908878-m03) DBG | unable to find current IP address of domain ha-908878-m03 in network mk-ha-908878
	I0528 20:40:34.015234   22579 main.go:141] libmachine: (ha-908878-m03) DBG | I0528 20:40:34.015178   23357 retry.go:31] will retry after 286.936339ms: waiting for machine to come up
	I0528 20:40:34.303824   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:34.304264   22579 main.go:141] libmachine: (ha-908878-m03) DBG | unable to find current IP address of domain ha-908878-m03 in network mk-ha-908878
	I0528 20:40:34.304285   22579 main.go:141] libmachine: (ha-908878-m03) DBG | I0528 20:40:34.304222   23357 retry.go:31] will retry after 285.998635ms: waiting for machine to come up
	I0528 20:40:34.591687   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:34.592185   22579 main.go:141] libmachine: (ha-908878-m03) DBG | unable to find current IP address of domain ha-908878-m03 in network mk-ha-908878
	I0528 20:40:34.592210   22579 main.go:141] libmachine: (ha-908878-m03) DBG | I0528 20:40:34.592145   23357 retry.go:31] will retry after 486.004926ms: waiting for machine to come up
	I0528 20:40:35.079894   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:35.080366   22579 main.go:141] libmachine: (ha-908878-m03) DBG | unable to find current IP address of domain ha-908878-m03 in network mk-ha-908878
	I0528 20:40:35.080387   22579 main.go:141] libmachine: (ha-908878-m03) DBG | I0528 20:40:35.080333   23357 retry.go:31] will retry after 430.172641ms: waiting for machine to come up
	I0528 20:40:35.512130   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:35.512597   22579 main.go:141] libmachine: (ha-908878-m03) DBG | unable to find current IP address of domain ha-908878-m03 in network mk-ha-908878
	I0528 20:40:35.512627   22579 main.go:141] libmachine: (ha-908878-m03) DBG | I0528 20:40:35.512550   23357 retry.go:31] will retry after 655.401985ms: waiting for machine to come up
	I0528 20:40:36.169262   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:36.169688   22579 main.go:141] libmachine: (ha-908878-m03) DBG | unable to find current IP address of domain ha-908878-m03 in network mk-ha-908878
	I0528 20:40:36.169718   22579 main.go:141] libmachine: (ha-908878-m03) DBG | I0528 20:40:36.169639   23357 retry.go:31] will retry after 953.090401ms: waiting for machine to come up
	I0528 20:40:37.124742   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:37.125027   22579 main.go:141] libmachine: (ha-908878-m03) DBG | unable to find current IP address of domain ha-908878-m03 in network mk-ha-908878
	I0528 20:40:37.125049   22579 main.go:141] libmachine: (ha-908878-m03) DBG | I0528 20:40:37.125009   23357 retry.go:31] will retry after 933.575405ms: waiting for machine to come up
	I0528 20:40:38.059832   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:38.060305   22579 main.go:141] libmachine: (ha-908878-m03) DBG | unable to find current IP address of domain ha-908878-m03 in network mk-ha-908878
	I0528 20:40:38.060332   22579 main.go:141] libmachine: (ha-908878-m03) DBG | I0528 20:40:38.060253   23357 retry.go:31] will retry after 933.852896ms: waiting for machine to come up
	I0528 20:40:38.995421   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:38.995923   22579 main.go:141] libmachine: (ha-908878-m03) DBG | unable to find current IP address of domain ha-908878-m03 in network mk-ha-908878
	I0528 20:40:38.995949   22579 main.go:141] libmachine: (ha-908878-m03) DBG | I0528 20:40:38.995867   23357 retry.go:31] will retry after 1.701447515s: waiting for machine to come up
	I0528 20:40:40.699010   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:40.699492   22579 main.go:141] libmachine: (ha-908878-m03) DBG | unable to find current IP address of domain ha-908878-m03 in network mk-ha-908878
	I0528 20:40:40.699517   22579 main.go:141] libmachine: (ha-908878-m03) DBG | I0528 20:40:40.699450   23357 retry.go:31] will retry after 1.616110377s: waiting for machine to come up
	I0528 20:40:42.318070   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:42.318522   22579 main.go:141] libmachine: (ha-908878-m03) DBG | unable to find current IP address of domain ha-908878-m03 in network mk-ha-908878
	I0528 20:40:42.318561   22579 main.go:141] libmachine: (ha-908878-m03) DBG | I0528 20:40:42.318452   23357 retry.go:31] will retry after 2.231719862s: waiting for machine to come up
	I0528 20:40:44.553111   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:44.553614   22579 main.go:141] libmachine: (ha-908878-m03) DBG | unable to find current IP address of domain ha-908878-m03 in network mk-ha-908878
	I0528 20:40:44.553644   22579 main.go:141] libmachine: (ha-908878-m03) DBG | I0528 20:40:44.553577   23357 retry.go:31] will retry after 2.63642465s: waiting for machine to come up
	I0528 20:40:47.191927   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:47.192245   22579 main.go:141] libmachine: (ha-908878-m03) DBG | unable to find current IP address of domain ha-908878-m03 in network mk-ha-908878
	I0528 20:40:47.192265   22579 main.go:141] libmachine: (ha-908878-m03) DBG | I0528 20:40:47.192220   23357 retry.go:31] will retry after 3.239065222s: waiting for machine to come up
	I0528 20:40:50.434633   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:50.435003   22579 main.go:141] libmachine: (ha-908878-m03) DBG | unable to find current IP address of domain ha-908878-m03 in network mk-ha-908878
	I0528 20:40:50.435025   22579 main.go:141] libmachine: (ha-908878-m03) DBG | I0528 20:40:50.434967   23357 retry.go:31] will retry after 5.565960506s: waiting for machine to come up
	I0528 20:40:56.004958   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:56.005405   22579 main.go:141] libmachine: (ha-908878-m03) Found IP for machine: 192.168.39.73
	I0528 20:40:56.005430   22579 main.go:141] libmachine: (ha-908878-m03) Reserving static IP address...
	I0528 20:40:56.005443   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has current primary IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:56.005865   22579 main.go:141] libmachine: (ha-908878-m03) DBG | unable to find host DHCP lease matching {name: "ha-908878-m03", mac: "52:54:00:92:3d:20", ip: "192.168.39.73"} in network mk-ha-908878
	I0528 20:40:56.074484   22579 main.go:141] libmachine: (ha-908878-m03) DBG | Getting to WaitForSSH function...
	I0528 20:40:56.074514   22579 main.go:141] libmachine: (ha-908878-m03) Reserved static IP address: 192.168.39.73
	I0528 20:40:56.074530   22579 main.go:141] libmachine: (ha-908878-m03) Waiting for SSH to be available...
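The retry lines above poll for a DHCP lease with a growing delay until the IP appears. A minimal, self-contained sketch of that pattern; getIP is a hypothetical stand-in for the lease lookup, not minikube's retry helper:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP polls getIP until it returns an address or the deadline passes.
    func waitForIP(getIP func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := getIP(); err == nil && ip != "" {
                return ip, nil
            }
            // Jittered, roughly doubling delay, like the intervals in the log.
            sleep := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            delay *= 2
            if delay > 8*time.Second {
                delay = 8 * time.Second
            }
        }
        return "", errors.New("timed out waiting for an IP address")
    }

    func main() {
        attempts := 0
        ip, err := waitForIP(func() (string, error) {
            attempts++
            if attempts < 4 {
                return "", errors.New("no lease yet")
            }
            return "192.168.39.73", nil
        }, 30*time.Second)
        fmt.Println(ip, err)
    }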
	I0528 20:40:56.076890   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:56.077254   22579 main.go:141] libmachine: (ha-908878-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878
	I0528 20:40:56.077279   22579 main.go:141] libmachine: (ha-908878-m03) DBG | unable to find defined IP address of network mk-ha-908878 interface with MAC address 52:54:00:92:3d:20
	I0528 20:40:56.077406   22579 main.go:141] libmachine: (ha-908878-m03) DBG | Using SSH client type: external
	I0528 20:40:56.077429   22579 main.go:141] libmachine: (ha-908878-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m03/id_rsa (-rw-------)
	I0528 20:40:56.077461   22579 main.go:141] libmachine: (ha-908878-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0528 20:40:56.077471   22579 main.go:141] libmachine: (ha-908878-m03) DBG | About to run SSH command:
	I0528 20:40:56.077483   22579 main.go:141] libmachine: (ha-908878-m03) DBG | exit 0
	I0528 20:40:56.081665   22579 main.go:141] libmachine: (ha-908878-m03) DBG | SSH cmd err, output: exit status 255: 
	I0528 20:40:56.081681   22579 main.go:141] libmachine: (ha-908878-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0528 20:40:56.081688   22579 main.go:141] libmachine: (ha-908878-m03) DBG | command : exit 0
	I0528 20:40:56.081697   22579 main.go:141] libmachine: (ha-908878-m03) DBG | err     : exit status 255
	I0528 20:40:56.081729   22579 main.go:141] libmachine: (ha-908878-m03) DBG | output  : 
	I0528 20:40:59.081870   22579 main.go:141] libmachine: (ha-908878-m03) DBG | Getting to WaitForSSH function...
	I0528 20:40:59.084087   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:59.084505   22579 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:40:59.084527   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:59.084694   22579 main.go:141] libmachine: (ha-908878-m03) DBG | Using SSH client type: external
	I0528 20:40:59.084722   22579 main.go:141] libmachine: (ha-908878-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m03/id_rsa (-rw-------)
	I0528 20:40:59.084750   22579 main.go:141] libmachine: (ha-908878-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.73 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0528 20:40:59.084768   22579 main.go:141] libmachine: (ha-908878-m03) DBG | About to run SSH command:
	I0528 20:40:59.084781   22579 main.go:141] libmachine: (ha-908878-m03) DBG | exit 0
	I0528 20:40:59.217703   22579 main.go:141] libmachine: (ha-908878-m03) DBG | SSH cmd err, output: <nil>: 
	I0528 20:40:59.218058   22579 main.go:141] libmachine: (ha-908878-m03) KVM machine creation complete!
	I0528 20:40:59.218352   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetConfigRaw
	I0528 20:40:59.218867   22579 main.go:141] libmachine: (ha-908878-m03) Calling .DriverName
	I0528 20:40:59.219065   22579 main.go:141] libmachine: (ha-908878-m03) Calling .DriverName
	I0528 20:40:59.219251   22579 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0528 20:40:59.219267   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetState
	I0528 20:40:59.220625   22579 main.go:141] libmachine: Detecting operating system of created instance...
	I0528 20:40:59.220639   22579 main.go:141] libmachine: Waiting for SSH to be available...
	I0528 20:40:59.220644   22579 main.go:141] libmachine: Getting to WaitForSSH function...
	I0528 20:40:59.220650   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHHostname
	I0528 20:40:59.222765   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:59.223152   22579 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:40:59.223181   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:59.223366   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHPort
	I0528 20:40:59.223559   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHKeyPath
	I0528 20:40:59.223699   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHKeyPath
	I0528 20:40:59.223852   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHUsername
	I0528 20:40:59.224054   22579 main.go:141] libmachine: Using SSH client type: native
	I0528 20:40:59.224236   22579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0528 20:40:59.224247   22579 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0528 20:40:59.337067   22579 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 20:40:59.337086   22579 main.go:141] libmachine: Detecting the provisioner...
	I0528 20:40:59.337094   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHHostname
	I0528 20:40:59.339822   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:59.340220   22579 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:40:59.340249   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:59.340378   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHPort
	I0528 20:40:59.340608   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHKeyPath
	I0528 20:40:59.340739   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHKeyPath
	I0528 20:40:59.340863   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHUsername
	I0528 20:40:59.341022   22579 main.go:141] libmachine: Using SSH client type: native
	I0528 20:40:59.341251   22579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0528 20:40:59.341265   22579 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0528 20:40:59.454410   22579 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0528 20:40:59.454467   22579 main.go:141] libmachine: found compatible host: buildroot
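Provisioner detection keys off the os-release output above. A small sketch that extracts the ID field from that exact text; the detection logic here is simplified for illustration:

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // The output returned by "cat /etc/os-release" in the log above.
    const osRelease = `NAME=Buildroot
    VERSION=2023.02.9-dirty
    ID=buildroot
    VERSION_ID=2023.02.9
    PRETTY_NAME="Buildroot 2023.02.9"`

    // osID returns the value of the ID= line, with surrounding quotes stripped.
    func osID(release string) string {
        sc := bufio.NewScanner(strings.NewReader(release))
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if strings.HasPrefix(line, "ID=") {
                return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
            }
        }
        return ""
    }

    func main() {
        fmt.Println("detected provisioner:", osID(osRelease)) // buildroot
    }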
	I0528 20:40:59.454477   22579 main.go:141] libmachine: Provisioning with buildroot...
	I0528 20:40:59.454491   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetMachineName
	I0528 20:40:59.454715   22579 buildroot.go:166] provisioning hostname "ha-908878-m03"
	I0528 20:40:59.454738   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetMachineName
	I0528 20:40:59.454931   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHHostname
	I0528 20:40:59.457481   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:59.457908   22579 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:40:59.457937   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:59.457996   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHPort
	I0528 20:40:59.458153   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHKeyPath
	I0528 20:40:59.458298   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHKeyPath
	I0528 20:40:59.458446   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHUsername
	I0528 20:40:59.458613   22579 main.go:141] libmachine: Using SSH client type: native
	I0528 20:40:59.458769   22579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0528 20:40:59.458781   22579 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-908878-m03 && echo "ha-908878-m03" | sudo tee /etc/hostname
	I0528 20:40:59.585371   22579 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-908878-m03
	
	I0528 20:40:59.585412   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHHostname
	I0528 20:40:59.587939   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:59.588326   22579 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:40:59.588357   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:59.588503   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHPort
	I0528 20:40:59.588763   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHKeyPath
	I0528 20:40:59.588952   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHKeyPath
	I0528 20:40:59.589112   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHUsername
	I0528 20:40:59.589291   22579 main.go:141] libmachine: Using SSH client type: native
	I0528 20:40:59.589493   22579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0528 20:40:59.589518   22579 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-908878-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-908878-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-908878-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 20:40:59.711306   22579 main.go:141] libmachine: SSH cmd err, output: <nil>: 
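The /etc/hosts snippet above is assembled on the client side and run over SSH. A sketch of building the same command string in Go; the helper name is illustrative only:

    package main

    import "fmt"

    // hostsCommand returns a shell snippet that maps 127.0.1.1 to the hostname,
    // mirroring the command shown in the log.
    func hostsCommand(hostname string) string {
        return fmt.Sprintf(`
            if ! grep -xq '.*\s%[1]s' /etc/hosts; then
                if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
                else
                    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
                fi
            fi`, hostname)
    }

    func main() {
        fmt.Println(hostsCommand("ha-908878-m03"))
    }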
	I0528 20:40:59.711331   22579 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18966-3963/.minikube CaCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18966-3963/.minikube}
	I0528 20:40:59.711345   22579 buildroot.go:174] setting up certificates
	I0528 20:40:59.711355   22579 provision.go:84] configureAuth start
	I0528 20:40:59.711367   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetMachineName
	I0528 20:40:59.711644   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetIP
	I0528 20:40:59.714387   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:59.714764   22579 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:40:59.714793   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:59.714910   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHHostname
	I0528 20:40:59.717214   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:59.717616   22579 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:40:59.717644   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:59.717798   22579 provision.go:143] copyHostCerts
	I0528 20:40:59.717830   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem
	I0528 20:40:59.717868   22579 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem, removing ...
	I0528 20:40:59.717880   22579 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem
	I0528 20:40:59.717959   22579 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem (1078 bytes)
	I0528 20:40:59.718054   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem
	I0528 20:40:59.718078   22579 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem, removing ...
	I0528 20:40:59.718087   22579 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem
	I0528 20:40:59.718123   22579 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem (1123 bytes)
	I0528 20:40:59.718190   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem
	I0528 20:40:59.718215   22579 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem, removing ...
	I0528 20:40:59.718224   22579 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem
	I0528 20:40:59.718266   22579 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem (1675 bytes)
	I0528 20:40:59.718354   22579 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem org=jenkins.ha-908878-m03 san=[127.0.0.1 192.168.39.73 ha-908878-m03 localhost minikube]
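The server certificate is generated with the SAN list logged above. A hedged crypto/x509 sketch that builds a template with those names; it is self-signed here for brevity, whereas the real flow signs with the cluster CA key:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-908878-m03"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
            KeyUsage:     x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The san list reported by the log.
            DNSNames:    []string{"ha-908878-m03", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.73")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        fmt.Println("generated", len(der), "bytes of DER")
    }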
	I0528 20:40:59.898087   22579 provision.go:177] copyRemoteCerts
	I0528 20:40:59.898139   22579 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 20:40:59.898161   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHHostname
	I0528 20:40:59.900892   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:59.901581   22579 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:40:59.901614   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:40:59.901792   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHPort
	I0528 20:40:59.901976   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHKeyPath
	I0528 20:40:59.902108   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHUsername
	I0528 20:40:59.902249   22579 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m03/id_rsa Username:docker}
	I0528 20:40:59.988393   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0528 20:40:59.988475   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0528 20:41:00.012880   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0528 20:41:00.012967   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0528 20:41:00.036809   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0528 20:41:00.036890   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0528 20:41:00.067715   22579 provision.go:87] duration metric: took 356.347821ms to configureAuth
	I0528 20:41:00.067746   22579 buildroot.go:189] setting minikube options for container-runtime
	I0528 20:41:00.067971   22579 config.go:182] Loaded profile config "ha-908878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 20:41:00.068060   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHHostname
	I0528 20:41:00.070792   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:41:00.071208   22579 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:41:00.071237   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:41:00.071394   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHPort
	I0528 20:41:00.071606   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHKeyPath
	I0528 20:41:00.071775   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHKeyPath
	I0528 20:41:00.071896   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHUsername
	I0528 20:41:00.072116   22579 main.go:141] libmachine: Using SSH client type: native
	I0528 20:41:00.072269   22579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0528 20:41:00.072283   22579 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0528 20:41:00.354424   22579 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
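The %!s(MISSING) in the logged command above is Go's fmt notation for a verb with no matching operand, i.e. the unfilled template was logged; the echoed output shows the value that was actually written. A sketch of assembling that sysconfig command (the helper name is illustrative):

    package main

    import "fmt"

    // crioSysconfigCommand builds the remote command that writes
    // /etc/sysconfig/crio.minikube and restarts CRI-O.
    func crioSysconfigCommand(opts string) string {
        payload := fmt.Sprintf("\nCRIO_MINIKUBE_OPTIONS='%s'\n", opts)
        return fmt.Sprintf(`sudo mkdir -p /etc/sysconfig && printf %%s "%s" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`, payload)
    }

    func main() {
        fmt.Println(crioSysconfigCommand("--insecure-registry 10.96.0.0/12 "))
    }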
	
	I0528 20:41:00.354456   22579 main.go:141] libmachine: Checking connection to Docker...
	I0528 20:41:00.354469   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetURL
	I0528 20:41:00.355955   22579 main.go:141] libmachine: (ha-908878-m03) DBG | Using libvirt version 6000000
	I0528 20:41:00.358290   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:41:00.358680   22579 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:41:00.358711   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:41:00.358865   22579 main.go:141] libmachine: Docker is up and running!
	I0528 20:41:00.358877   22579 main.go:141] libmachine: Reticulating splines...
	I0528 20:41:00.358883   22579 client.go:171] duration metric: took 28.031799176s to LocalClient.Create
	I0528 20:41:00.358904   22579 start.go:167] duration metric: took 28.031853438s to libmachine.API.Create "ha-908878"
	I0528 20:41:00.358916   22579 start.go:293] postStartSetup for "ha-908878-m03" (driver="kvm2")
	I0528 20:41:00.358932   22579 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0528 20:41:00.358953   22579 main.go:141] libmachine: (ha-908878-m03) Calling .DriverName
	I0528 20:41:00.359201   22579 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0528 20:41:00.359221   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHHostname
	I0528 20:41:00.361345   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:41:00.361700   22579 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:41:00.361728   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:41:00.361893   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHPort
	I0528 20:41:00.362095   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHKeyPath
	I0528 20:41:00.362258   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHUsername
	I0528 20:41:00.362396   22579 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m03/id_rsa Username:docker}
	I0528 20:41:00.448222   22579 ssh_runner.go:195] Run: cat /etc/os-release
	I0528 20:41:00.452456   22579 info.go:137] Remote host: Buildroot 2023.02.9
	I0528 20:41:00.452477   22579 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/addons for local assets ...
	I0528 20:41:00.452536   22579 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/files for local assets ...
	I0528 20:41:00.452601   22579 filesync.go:149] local asset: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem -> 117602.pem in /etc/ssl/certs
	I0528 20:41:00.452610   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem -> /etc/ssl/certs/117602.pem
	I0528 20:41:00.452684   22579 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0528 20:41:00.462901   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem --> /etc/ssl/certs/117602.pem (1708 bytes)
	I0528 20:41:00.488691   22579 start.go:296] duration metric: took 129.762748ms for postStartSetup
	I0528 20:41:00.488733   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetConfigRaw
	I0528 20:41:00.489250   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetIP
	I0528 20:41:00.491626   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:41:00.491981   22579 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:41:00.492008   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:41:00.492250   22579 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/config.json ...
	I0528 20:41:00.492428   22579 start.go:128] duration metric: took 28.183249732s to createHost
	I0528 20:41:00.492449   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHHostname
	I0528 20:41:00.494554   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:41:00.494899   22579 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:41:00.494920   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:41:00.495085   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHPort
	I0528 20:41:00.495257   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHKeyPath
	I0528 20:41:00.495411   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHKeyPath
	I0528 20:41:00.495596   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHUsername
	I0528 20:41:00.495738   22579 main.go:141] libmachine: Using SSH client type: native
	I0528 20:41:00.495905   22579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0528 20:41:00.495922   22579 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0528 20:41:00.606282   22579 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716928860.588131919
	
	I0528 20:41:00.606299   22579 fix.go:216] guest clock: 1716928860.588131919
	I0528 20:41:00.606306   22579 fix.go:229] Guest: 2024-05-28 20:41:00.588131919 +0000 UTC Remote: 2024-05-28 20:41:00.492438726 +0000 UTC m=+152.016726426 (delta=95.693193ms)
	I0528 20:41:00.606319   22579 fix.go:200] guest clock delta is within tolerance: 95.693193ms
	I0528 20:41:00.606324   22579 start.go:83] releasing machines lock for "ha-908878-m03", held for 28.297252585s
	I0528 20:41:00.606341   22579 main.go:141] libmachine: (ha-908878-m03) Calling .DriverName
	I0528 20:41:00.606568   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetIP
	I0528 20:41:00.609116   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:41:00.609475   22579 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:41:00.609503   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:41:00.611857   22579 out.go:177] * Found network options:
	I0528 20:41:00.613264   22579 out.go:177]   - NO_PROXY=192.168.39.100,192.168.39.239
	W0528 20:41:00.614453   22579 proxy.go:119] fail to check proxy env: Error ip not in block
	W0528 20:41:00.614480   22579 proxy.go:119] fail to check proxy env: Error ip not in block
	I0528 20:41:00.614496   22579 main.go:141] libmachine: (ha-908878-m03) Calling .DriverName
	I0528 20:41:00.614990   22579 main.go:141] libmachine: (ha-908878-m03) Calling .DriverName
	I0528 20:41:00.615163   22579 main.go:141] libmachine: (ha-908878-m03) Calling .DriverName
	I0528 20:41:00.615264   22579 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0528 20:41:00.615306   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHHostname
	W0528 20:41:00.615347   22579 proxy.go:119] fail to check proxy env: Error ip not in block
	W0528 20:41:00.615372   22579 proxy.go:119] fail to check proxy env: Error ip not in block
	I0528 20:41:00.615437   22579 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0528 20:41:00.615458   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHHostname
	I0528 20:41:00.617989   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:41:00.618208   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:41:00.618411   22579 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:41:00.618439   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:41:00.618608   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHPort
	I0528 20:41:00.618756   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHKeyPath
	I0528 20:41:00.618766   22579 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:41:00.618786   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:41:00.618928   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHUsername
	I0528 20:41:00.618946   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHPort
	I0528 20:41:00.619096   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHKeyPath
	I0528 20:41:00.619088   22579 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m03/id_rsa Username:docker}
	I0528 20:41:00.619222   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHUsername
	I0528 20:41:00.619353   22579 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m03/id_rsa Username:docker}
	I0528 20:41:00.856279   22579 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0528 20:41:00.862437   22579 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0528 20:41:00.862494   22579 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 20:41:00.879166   22579 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0528 20:41:00.879190   22579 start.go:494] detecting cgroup driver to use...
	I0528 20:41:00.879252   22579 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0528 20:41:00.896017   22579 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 20:41:00.909602   22579 docker.go:217] disabling cri-docker service (if available) ...
	I0528 20:41:00.909651   22579 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0528 20:41:00.924954   22579 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0528 20:41:00.940065   22579 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0528 20:41:01.053520   22579 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0528 20:41:01.204877   22579 docker.go:233] disabling docker service ...
	I0528 20:41:01.204948   22579 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0528 20:41:01.220221   22579 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0528 20:41:01.233164   22579 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0528 20:41:01.370367   22579 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0528 20:41:01.495497   22579 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0528 20:41:01.510142   22579 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 20:41:01.529604   22579 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0528 20:41:01.529668   22579 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:41:01.540330   22579 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0528 20:41:01.540390   22579 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:41:01.551028   22579 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:41:01.561469   22579 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:41:01.572897   22579 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0528 20:41:01.584697   22579 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:41:01.597498   22579 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:41:01.618112   22579 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
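	Taken together, the sed edits above should leave the CRI-O drop-in with roughly the following keys (a sketch assuming the stock 02-crio.conf layout shipped in the minikube ISO):
	    sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	    # pause_image = "registry.k8s.io/pause:3.9"
	    # cgroup_manager = "cgroupfs"
	    # conmon_cgroup = "pod"
	    # default_sysctls = [
	    #   "net.ipv4.ip_unprivileged_port_start=0",
	    # ]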
	I0528 20:41:01.629331   22579 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0528 20:41:01.639391   22579 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0528 20:41:01.639445   22579 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0528 20:41:01.652370   22579 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
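	The sysctl probe above fails only because br_netfilter is not loaded yet; the modprobe and the ip_forward write are the remediation. A hedged re-check after those two commands:
	    sysctl net.bridge.bridge-nf-call-iptables    # should now resolve (typically = 1)
	    cat /proc/sys/net/ipv4/ip_forward            # should print 1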
	I0528 20:41:01.662436   22579 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 20:41:01.792319   22579 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0528 20:41:01.928887   22579 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0528 20:41:01.928968   22579 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0528 20:41:01.933740   22579 start.go:562] Will wait 60s for crictl version
	I0528 20:41:01.933809   22579 ssh_runner.go:195] Run: which crictl
	I0528 20:41:01.937541   22579 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0528 20:41:01.976649   22579 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0528 20:41:01.976735   22579 ssh_runner.go:195] Run: crio --version
	I0528 20:41:02.005833   22579 ssh_runner.go:195] Run: crio --version
	I0528 20:41:02.037660   22579 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0528 20:41:02.038815   22579 out.go:177]   - env NO_PROXY=192.168.39.100
	I0528 20:41:02.040107   22579 out.go:177]   - env NO_PROXY=192.168.39.100,192.168.39.239
	I0528 20:41:02.041333   22579 main.go:141] libmachine: (ha-908878-m03) Calling .GetIP
	I0528 20:41:02.043750   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:41:02.044044   22579 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:41:02.044076   22579 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:41:02.044253   22579 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0528 20:41:02.048567   22579 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 20:41:02.062479   22579 mustload.go:65] Loading cluster: ha-908878
	I0528 20:41:02.062721   22579 config.go:182] Loaded profile config "ha-908878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 20:41:02.063015   22579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:41:02.063055   22579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:41:02.077127   22579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37145
	I0528 20:41:02.077499   22579 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:41:02.077990   22579 main.go:141] libmachine: Using API Version  1
	I0528 20:41:02.078012   22579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:41:02.078321   22579 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:41:02.078511   22579 main.go:141] libmachine: (ha-908878) Calling .GetState
	I0528 20:41:02.079938   22579 host.go:66] Checking if "ha-908878" exists ...
	I0528 20:41:02.080215   22579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:41:02.080246   22579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:41:02.094090   22579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39311
	I0528 20:41:02.094479   22579 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:41:02.094947   22579 main.go:141] libmachine: Using API Version  1
	I0528 20:41:02.094964   22579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:41:02.095254   22579 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:41:02.095454   22579 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:41:02.095624   22579 certs.go:68] Setting up /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878 for IP: 192.168.39.73
	I0528 20:41:02.095633   22579 certs.go:194] generating shared ca certs ...
	I0528 20:41:02.095645   22579 certs.go:226] acquiring lock for ca certs: {Name:mkf081a2e878fd451cfbdf01dc28a5f7a6a3abc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:41:02.095771   22579 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key
	I0528 20:41:02.095830   22579 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key
	I0528 20:41:02.095843   22579 certs.go:256] generating profile certs ...
	I0528 20:41:02.095930   22579 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/client.key
	I0528 20:41:02.095960   22579 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key.25750a69
	I0528 20:41:02.095977   22579 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt.25750a69 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.100 192.168.39.239 192.168.39.73 192.168.39.254]
	I0528 20:41:02.254924   22579 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt.25750a69 ...
	I0528 20:41:02.254954   22579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt.25750a69: {Name:mk58313499148b52ec97dc34165b38b9ed8d227b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:41:02.255108   22579 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key.25750a69 ...
	I0528 20:41:02.255122   22579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key.25750a69: {Name:mk956dafa3c18b705956b9d3cb0dd665fa1d7a7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:41:02.255189   22579 certs.go:381] copying /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt.25750a69 -> /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt
	I0528 20:41:02.255315   22579 certs.go:385] copying /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key.25750a69 -> /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key
	I0528 20:41:02.255428   22579 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/proxy-client.key
	I0528 20:41:02.255441   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0528 20:41:02.255453   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0528 20:41:02.255464   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0528 20:41:02.255479   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0528 20:41:02.255494   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0528 20:41:02.255506   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0528 20:41:02.255518   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0528 20:41:02.255531   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0528 20:41:02.255578   22579 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem (1338 bytes)
	W0528 20:41:02.255604   22579 certs.go:480] ignoring /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760_empty.pem, impossibly tiny 0 bytes
	I0528 20:41:02.255613   22579 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem (1675 bytes)
	I0528 20:41:02.255633   22579 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem (1078 bytes)
	I0528 20:41:02.255654   22579 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem (1123 bytes)
	I0528 20:41:02.255676   22579 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem (1675 bytes)
	I0528 20:41:02.255711   22579 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem (1708 bytes)
	I0528 20:41:02.255735   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem -> /usr/share/ca-certificates/117602.pem
	I0528 20:41:02.255749   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0528 20:41:02.255760   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem -> /usr/share/ca-certificates/11760.pem
	I0528 20:41:02.255789   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:41:02.258851   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:41:02.259277   22579 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:41:02.259304   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:41:02.259475   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:41:02.259647   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:41:02.259760   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:41:02.259855   22579 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/id_rsa Username:docker}
	I0528 20:41:02.338012   22579 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0528 20:41:02.343982   22579 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0528 20:41:02.355493   22579 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0528 20:41:02.359726   22579 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0528 20:41:02.370384   22579 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0528 20:41:02.375348   22579 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0528 20:41:02.387184   22579 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0528 20:41:02.391211   22579 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0528 20:41:02.402117   22579 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0528 20:41:02.407250   22579 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0528 20:41:02.420121   22579 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0528 20:41:02.424452   22579 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0528 20:41:02.435455   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0528 20:41:02.462917   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0528 20:41:02.488517   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0528 20:41:02.511647   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0528 20:41:02.533936   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0528 20:41:02.556162   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0528 20:41:02.578549   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0528 20:41:02.601950   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0528 20:41:02.627962   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem --> /usr/share/ca-certificates/117602.pem (1708 bytes)
	I0528 20:41:02.652566   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0528 20:41:02.678156   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem --> /usr/share/ca-certificates/11760.pem (1338 bytes)
	I0528 20:41:02.702155   22579 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0528 20:41:02.718360   22579 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0528 20:41:02.736350   22579 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0528 20:41:02.752301   22579 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0528 20:41:02.768517   22579 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0528 20:41:02.784318   22579 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0528 20:41:02.799999   22579 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0528 20:41:02.815934   22579 ssh_runner.go:195] Run: openssl version
	I0528 20:41:02.821967   22579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/117602.pem && ln -fs /usr/share/ca-certificates/117602.pem /etc/ssl/certs/117602.pem"
	I0528 20:41:02.834372   22579 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117602.pem
	I0528 20:41:02.839042   22579 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 28 20:34 /usr/share/ca-certificates/117602.pem
	I0528 20:41:02.839089   22579 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117602.pem
	I0528 20:41:02.845026   22579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/117602.pem /etc/ssl/certs/3ec20f2e.0"
	I0528 20:41:02.857373   22579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0528 20:41:02.870549   22579 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0528 20:41:02.875252   22579 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 28 20:22 /usr/share/ca-certificates/minikubeCA.pem
	I0528 20:41:02.875319   22579 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0528 20:41:02.881064   22579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0528 20:41:02.892281   22579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11760.pem && ln -fs /usr/share/ca-certificates/11760.pem /etc/ssl/certs/11760.pem"
	I0528 20:41:02.903533   22579 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11760.pem
	I0528 20:41:02.907870   22579 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 28 20:34 /usr/share/ca-certificates/11760.pem
	I0528 20:41:02.907922   22579 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11760.pem
	I0528 20:41:02.913242   22579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11760.pem /etc/ssl/certs/51391683.0"
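	The <hash>.0 symlinks above follow OpenSSL's subject-hash lookup convention: x509 -hash prints the name the library searches for under /etc/ssl/certs. Reproducing one of the links by hand, with values taken from this log (a sketch):
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941 here
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0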
	I0528 20:41:02.925437   22579 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0528 20:41:02.929789   22579 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0528 20:41:02.929847   22579 kubeadm.go:928] updating node {m03 192.168.39.73 8443 v1.30.1 crio true true} ...
	I0528 20:41:02.929930   22579 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-908878-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.73
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-908878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0528 20:41:02.929965   22579 kube-vip.go:115] generating kube-vip config ...
	I0528 20:41:02.929993   22579 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0528 20:41:02.945244   22579 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0528 20:41:02.945296   22579 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
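	This manifest is what later gets copied to /etc/kubernetes/manifests/kube-vip.yaml (see the scp below), so the kubelet runs kube-vip as a static pod that announces the 192.168.39.254 VIP on port 8443. A hedged way to confirm it on the node once kubelet is up:
	    ls -l /etc/kubernetes/manifests/kube-vip.yaml
	    sudo crictl ps --name kube-vip    # the static pod's container should appear here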
	I0528 20:41:02.945338   22579 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0528 20:41:02.955092   22579 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0528 20:41:02.955149   22579 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0528 20:41:02.964780   22579 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0528 20:41:02.964801   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/linux/amd64/v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0528 20:41:02.964812   22579 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256
	I0528 20:41:02.964836   22579 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256
	I0528 20:41:02.964855   22579 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 20:41:02.964856   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/linux/amd64/v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0528 20:41:02.964872   22579 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0528 20:41:02.964917   22579 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0528 20:41:02.969140   22579 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0528 20:41:02.969162   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/cache/linux/amd64/v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0528 20:41:03.010500   22579 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0528 20:41:03.010507   22579 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/linux/amd64/v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0528 20:41:03.010560   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/cache/linux/amd64/v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0528 20:41:03.010643   22579 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0528 20:41:03.057961   22579 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0528 20:41:03.058002   22579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/cache/linux/amd64/v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
	I0528 20:41:03.905521   22579 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0528 20:41:03.916413   22579 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0528 20:41:03.933857   22579 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0528 20:41:03.950796   22579 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0528 20:41:03.969238   22579 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0528 20:41:03.973578   22579 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 20:41:03.987320   22579 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 20:41:04.124115   22579 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 20:41:04.141725   22579 host.go:66] Checking if "ha-908878" exists ...
	I0528 20:41:04.142097   22579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:41:04.142137   22579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:41:04.157653   22579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36039
	I0528 20:41:04.158148   22579 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:41:04.158681   22579 main.go:141] libmachine: Using API Version  1
	I0528 20:41:04.158706   22579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:41:04.158998   22579 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:41:04.159375   22579 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:41:04.159565   22579 start.go:316] joinCluster: &{Name:ha-908878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-908878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.239 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.73 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 20:41:04.159677   22579 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0528 20:41:04.159692   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:41:04.162581   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:41:04.162955   22579 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:41:04.162982   22579 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:41:04.163126   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:41:04.163302   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:41:04.163464   22579 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:41:04.163593   22579 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/id_rsa Username:docker}
	I0528 20:41:04.328854   22579 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.73 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0528 20:41:04.328907   22579 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token k1hlwe.i66bv2ctvga46c3g --discovery-token-ca-cert-hash sha256:1de7c92f7c6fda1d3adee02fe3e58e442e8ad50d9d224864ca8764aa1a3ae8bb --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-908878-m03 --control-plane --apiserver-advertise-address=192.168.39.73 --apiserver-bind-port=8443"
	I0528 20:41:27.532526   22579 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token k1hlwe.i66bv2ctvga46c3g --discovery-token-ca-cert-hash sha256:1de7c92f7c6fda1d3adee02fe3e58e442e8ad50d9d224864ca8764aa1a3ae8bb --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-908878-m03 --control-plane --apiserver-advertise-address=192.168.39.73 --apiserver-bind-port=8443": (23.203579275s)
	I0528 20:41:27.532567   22579 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0528 20:41:28.045867   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-908878-m03 minikube.k8s.io/updated_at=2024_05_28T20_41_28_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82 minikube.k8s.io/name=ha-908878 minikube.k8s.io/primary=false
	I0528 20:41:28.166277   22579 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-908878-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0528 20:41:28.280172   22579 start.go:318] duration metric: took 24.120602222s to joinCluster
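	After the join, label, and taint-removal steps above, the new control-plane node can be inspected the same way the harness does, via the bundled kubectl and kubeconfig (a sketch):
	    sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get nodes -o wide
	    # ha-908878-m03 should be listed as a control-plane node; Ready can lag by a few seconds (see the polling below)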
	I0528 20:41:28.280242   22579 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.73 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0528 20:41:28.281519   22579 out.go:177] * Verifying Kubernetes components...
	I0528 20:41:28.280514   22579 config.go:182] Loaded profile config "ha-908878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 20:41:28.282678   22579 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 20:41:28.558017   22579 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 20:41:28.575792   22579 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 20:41:28.576116   22579 kapi.go:59] client config for ha-908878: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/client.crt", KeyFile:"/home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/client.key", CAFile:"/home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf8220), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0528 20:41:28.576202   22579 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.100:8443
	I0528 20:41:28.576472   22579 node_ready.go:35] waiting up to 6m0s for node "ha-908878-m03" to be "Ready" ...
	I0528 20:41:28.576551   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:28.576561   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:28.576573   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:28.576582   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:28.581244   22579 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 20:41:29.076650   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:29.076679   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:29.076689   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:29.076694   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:29.080062   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:29.576973   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:29.577002   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:29.577013   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:29.577019   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:29.580386   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:30.077582   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:30.077602   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:30.077608   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:30.077612   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:30.080333   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:30.577164   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:30.577189   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:30.577201   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:30.577206   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:30.580013   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:30.580826   22579 node_ready.go:53] node "ha-908878-m03" has status "Ready":"False"
	I0528 20:41:31.076834   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:31.076858   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:31.076869   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:31.076876   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:31.080069   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:31.577469   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:31.577497   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:31.577507   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:31.577513   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:31.581059   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:32.076832   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:32.076855   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:32.076865   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:32.076871   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:32.081751   22579 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 20:41:32.577063   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:32.577086   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:32.577093   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:32.577097   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:32.582036   22579 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 20:41:32.583242   22579 node_ready.go:53] node "ha-908878-m03" has status "Ready":"False"
	I0528 20:41:33.077662   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:33.077685   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:33.077693   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:33.077697   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:33.081149   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:33.577516   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:33.577538   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:33.577548   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:33.577552   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:33.582083   22579 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 20:41:34.077616   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:34.077638   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:34.077648   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:34.077655   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:34.081428   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:34.577011   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:34.577035   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:34.577043   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:34.577050   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:34.580350   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:35.077384   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:35.077404   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:35.077429   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:35.077433   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:35.080731   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:35.081469   22579 node_ready.go:49] node "ha-908878-m03" has status "Ready":"True"
	I0528 20:41:35.081490   22579 node_ready.go:38] duration metric: took 6.504999349s for node "ha-908878-m03" to be "Ready" ...
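	The repeated GETs above are minikube's own readiness poll against /api/v1/nodes/ha-908878-m03. An equivalent one-liner outside the harness, assuming the kubeconfig context carries the profile name (a sketch using the same 6m budget):
	    kubectl --context ha-908878 wait --for=condition=Ready node/ha-908878-m03 --timeout=6m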
	I0528 20:41:35.081498   22579 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 20:41:35.081546   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods
	I0528 20:41:35.081555   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:35.081562   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:35.081567   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:35.087521   22579 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 20:41:35.093456   22579 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-5fmns" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:35.093524   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5fmns
	I0528 20:41:35.093529   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:35.093535   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:35.093538   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:35.096612   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:35.097689   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878
	I0528 20:41:35.097703   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:35.097710   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:35.097713   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:35.100145   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:35.100788   22579 pod_ready.go:92] pod "coredns-7db6d8ff4d-5fmns" in "kube-system" namespace has status "Ready":"True"
	I0528 20:41:35.100804   22579 pod_ready.go:81] duration metric: took 7.327582ms for pod "coredns-7db6d8ff4d-5fmns" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:35.100811   22579 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mvx67" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:35.100855   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mvx67
	I0528 20:41:35.100863   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:35.100869   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:35.100873   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:35.103504   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:35.104108   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878
	I0528 20:41:35.104124   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:35.104131   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:35.104134   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:35.106626   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:35.107168   22579 pod_ready.go:92] pod "coredns-7db6d8ff4d-mvx67" in "kube-system" namespace has status "Ready":"True"
	I0528 20:41:35.107186   22579 pod_ready.go:81] duration metric: took 6.368888ms for pod "coredns-7db6d8ff4d-mvx67" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:35.107199   22579 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-908878" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:35.107261   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878
	I0528 20:41:35.107274   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:35.107284   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:35.107289   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:35.109851   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:35.110371   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878
	I0528 20:41:35.110384   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:35.110391   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:35.110395   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:35.113062   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:35.113587   22579 pod_ready.go:92] pod "etcd-ha-908878" in "kube-system" namespace has status "Ready":"True"
	I0528 20:41:35.113602   22579 pod_ready.go:81] duration metric: took 6.39665ms for pod "etcd-ha-908878" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:35.113609   22579 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-908878-m02" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:35.113645   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m02
	I0528 20:41:35.113652   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:35.113658   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:35.113662   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:35.116849   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:35.117944   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:41:35.117960   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:35.117971   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:35.117977   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:35.120520   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:35.120945   22579 pod_ready.go:92] pod "etcd-ha-908878-m02" in "kube-system" namespace has status "Ready":"True"
	I0528 20:41:35.120959   22579 pod_ready.go:81] duration metric: took 7.345393ms for pod "etcd-ha-908878-m02" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:35.120967   22579 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-908878-m03" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:35.278365   22579 request.go:629] Waited for 157.321448ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m03
	I0528 20:41:35.278446   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m03
	I0528 20:41:35.278455   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:35.278462   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:35.278469   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:35.281934   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:35.478332   22579 request.go:629] Waited for 195.274194ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:35.478388   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:35.478393   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:35.478400   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:35.478408   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:35.482490   22579 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 20:41:35.677814   22579 request.go:629] Waited for 56.219595ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m03
	I0528 20:41:35.677881   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m03
	I0528 20:41:35.677888   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:35.677902   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:35.677911   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:35.682013   22579 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 20:41:35.878393   22579 request.go:629] Waited for 195.365934ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:35.878445   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:35.878450   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:35.878457   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:35.878470   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:35.881747   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:36.121555   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m03
	I0528 20:41:36.121595   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:36.121606   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:36.121612   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:36.124169   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:36.278240   22579 request.go:629] Waited for 153.312957ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:36.278307   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:36.278314   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:36.278324   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:36.278333   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:36.282054   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:36.621744   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m03
	I0528 20:41:36.621777   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:36.621783   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:36.621785   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:36.624904   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:36.677981   22579 request.go:629] Waited for 52.231287ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:36.678067   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:36.678079   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:36.678090   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:36.678095   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:36.680792   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:37.121256   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m03
	I0528 20:41:37.121276   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:37.121284   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:37.121288   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:37.124591   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:37.125280   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:37.125295   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:37.125302   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:37.125307   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:37.127810   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:37.128328   22579 pod_ready.go:102] pod "etcd-ha-908878-m03" in "kube-system" namespace has status "Ready":"False"
	I0528 20:41:37.621136   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m03
	I0528 20:41:37.621157   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:37.621164   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:37.621169   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:37.624348   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:37.625160   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:37.625175   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:37.625182   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:37.625189   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:37.627751   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:38.121329   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m03
	I0528 20:41:38.121357   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:38.121384   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:38.121389   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:38.126397   22579 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 20:41:38.127043   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:38.127060   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:38.127068   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:38.127071   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:38.129636   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:38.621865   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m03
	I0528 20:41:38.621886   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:38.621893   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:38.621898   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:38.624799   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:38.625730   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:38.625744   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:38.625751   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:38.625755   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:38.628377   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:39.121835   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m03
	I0528 20:41:39.121856   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:39.121864   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:39.121869   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:39.124910   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:39.125636   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:39.125653   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:39.125663   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:39.125669   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:39.128117   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:39.128678   22579 pod_ready.go:102] pod "etcd-ha-908878-m03" in "kube-system" namespace has status "Ready":"False"
	I0528 20:41:39.622027   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m03
	I0528 20:41:39.622052   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:39.622065   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:39.622070   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:39.625337   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:39.626327   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:39.626344   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:39.626351   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:39.626354   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:39.628950   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:40.121997   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m03
	I0528 20:41:40.122023   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:40.122034   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:40.122040   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:40.125013   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:40.125637   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:40.125654   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:40.125663   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:40.125668   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:40.129110   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:40.621278   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m03
	I0528 20:41:40.621297   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:40.621305   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:40.621311   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:40.624393   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:40.625284   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:40.625302   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:40.625310   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:40.625316   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:40.630402   22579 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0528 20:41:41.122042   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m03
	I0528 20:41:41.122065   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:41.122076   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:41.122081   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:41.126202   22579 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 20:41:41.126901   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:41.126919   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:41.126929   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:41.126935   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:41.130537   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:41.131125   22579 pod_ready.go:102] pod "etcd-ha-908878-m03" in "kube-system" namespace has status "Ready":"False"
	I0528 20:41:41.621967   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m03
	I0528 20:41:41.621995   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:41.622013   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:41.622019   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:41.624840   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:41.625425   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:41.625439   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:41.625445   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:41.625449   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:41.628217   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:42.121149   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m03
	I0528 20:41:42.121170   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:42.121177   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:42.121181   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:42.124086   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:42.125061   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:42.125074   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:42.125081   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:42.125084   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:42.129416   22579 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0528 20:41:42.621323   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m03
	I0528 20:41:42.621348   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:42.621359   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:42.621365   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:42.625262   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:42.626006   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:42.626021   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:42.626028   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:42.626031   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:42.628611   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:43.121573   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m03
	I0528 20:41:43.121605   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:43.121613   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:43.121616   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:43.124869   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:43.125691   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:43.125705   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:43.125712   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:43.125716   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:43.128245   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:43.621547   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m03
	I0528 20:41:43.621577   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:43.621587   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:43.621590   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:43.625259   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:43.625865   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:43.625881   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:43.625888   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:43.625892   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:43.628340   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:43.628869   22579 pod_ready.go:102] pod "etcd-ha-908878-m03" in "kube-system" namespace has status "Ready":"False"
	I0528 20:41:44.121837   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m03
	I0528 20:41:44.121866   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:44.121878   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:44.121885   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:44.125024   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:44.125895   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:44.125915   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:44.125924   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:44.125928   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:44.128451   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:44.621119   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-908878-m03
	I0528 20:41:44.621139   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:44.621147   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:44.621150   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:44.624180   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:44.624972   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:44.624992   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:44.625002   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:44.625010   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:44.627772   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:44.628401   22579 pod_ready.go:92] pod "etcd-ha-908878-m03" in "kube-system" namespace has status "Ready":"True"
	I0528 20:41:44.628421   22579 pod_ready.go:81] duration metric: took 9.50744498s for pod "etcd-ha-908878-m03" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:44.628441   22579 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-908878" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:44.628511   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-908878
	I0528 20:41:44.628525   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:44.628535   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:44.628544   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:44.631158   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:44.631744   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878
	I0528 20:41:44.631761   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:44.631768   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:44.631772   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:44.634025   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:44.634480   22579 pod_ready.go:92] pod "kube-apiserver-ha-908878" in "kube-system" namespace has status "Ready":"True"
	I0528 20:41:44.634497   22579 pod_ready.go:81] duration metric: took 6.044261ms for pod "kube-apiserver-ha-908878" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:44.634507   22579 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-908878-m02" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:44.634565   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-908878-m02
	I0528 20:41:44.634576   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:44.634586   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:44.634596   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:44.636672   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:44.637258   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:41:44.637273   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:44.637280   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:44.637284   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:44.639578   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:44.640142   22579 pod_ready.go:92] pod "kube-apiserver-ha-908878-m02" in "kube-system" namespace has status "Ready":"True"
	I0528 20:41:44.640158   22579 pod_ready.go:81] duration metric: took 5.643738ms for pod "kube-apiserver-ha-908878-m02" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:44.640166   22579 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-908878-m03" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:44.640216   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-908878-m03
	I0528 20:41:44.640224   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:44.640230   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:44.640237   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:44.642688   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:44.643440   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:44.643453   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:44.643460   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:44.643464   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:44.646255   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:44.646798   22579 pod_ready.go:92] pod "kube-apiserver-ha-908878-m03" in "kube-system" namespace has status "Ready":"True"
	I0528 20:41:44.646821   22579 pod_ready.go:81] duration metric: took 6.642368ms for pod "kube-apiserver-ha-908878-m03" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:44.646832   22579 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-908878" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:44.646883   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-908878
	I0528 20:41:44.646893   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:44.646904   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:44.646914   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:44.650103   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:44.677820   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878
	I0528 20:41:44.677834   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:44.677842   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:44.677846   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:44.680523   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:44.680918   22579 pod_ready.go:92] pod "kube-controller-manager-ha-908878" in "kube-system" namespace has status "Ready":"True"
	I0528 20:41:44.680933   22579 pod_ready.go:81] duration metric: took 34.091199ms for pod "kube-controller-manager-ha-908878" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:44.680953   22579 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-908878-m02" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:44.878394   22579 request.go:629] Waited for 197.354576ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-908878-m02
	I0528 20:41:44.878465   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-908878-m02
	I0528 20:41:44.878472   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:44.878482   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:44.878488   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:44.881733   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:45.077835   22579 request.go:629] Waited for 195.319662ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:41:45.077923   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:41:45.077934   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:45.077945   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:45.077952   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:45.081869   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:45.082908   22579 pod_ready.go:92] pod "kube-controller-manager-ha-908878-m02" in "kube-system" namespace has status "Ready":"True"
	I0528 20:41:45.082930   22579 pod_ready.go:81] duration metric: took 401.970164ms for pod "kube-controller-manager-ha-908878-m02" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:45.082943   22579 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-908878-m03" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:45.278017   22579 request.go:629] Waited for 194.999461ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-908878-m03
	I0528 20:41:45.278102   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-908878-m03
	I0528 20:41:45.278111   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:45.278122   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:45.278143   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:45.281456   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:45.478467   22579 request.go:629] Waited for 196.368725ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:45.478518   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:45.478523   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:45.478530   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:45.478535   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:45.481621   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:45.482212   22579 pod_ready.go:92] pod "kube-controller-manager-ha-908878-m03" in "kube-system" namespace has status "Ready":"True"
	I0528 20:41:45.482230   22579 pod_ready.go:81] duration metric: took 399.279724ms for pod "kube-controller-manager-ha-908878-m03" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:45.482240   22579 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4vjp6" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:45.678355   22579 request.go:629] Waited for 196.03886ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4vjp6
	I0528 20:41:45.678412   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4vjp6
	I0528 20:41:45.678418   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:45.678426   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:45.678430   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:45.681644   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:45.877836   22579 request.go:629] Waited for 195.316455ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:45.877906   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:45.877913   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:45.877920   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:45.877926   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:45.880825   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:45.881470   22579 pod_ready.go:92] pod "kube-proxy-4vjp6" in "kube-system" namespace has status "Ready":"True"
	I0528 20:41:45.881490   22579 pod_ready.go:81] duration metric: took 399.243929ms for pod "kube-proxy-4vjp6" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:45.881504   22579 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ng8mq" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:46.077466   22579 request.go:629] Waited for 195.898762ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ng8mq
	I0528 20:41:46.077557   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ng8mq
	I0528 20:41:46.077568   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:46.077575   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:46.077579   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:46.080532   22579 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0528 20:41:46.277396   22579 request.go:629] Waited for 196.114941ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-908878
	I0528 20:41:46.277447   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878
	I0528 20:41:46.277454   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:46.277462   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:46.277469   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:46.280545   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:46.281104   22579 pod_ready.go:92] pod "kube-proxy-ng8mq" in "kube-system" namespace has status "Ready":"True"
	I0528 20:41:46.281121   22579 pod_ready.go:81] duration metric: took 399.610916ms for pod "kube-proxy-ng8mq" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:46.281130   22579 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pg89k" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:46.478401   22579 request.go:629] Waited for 197.207302ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pg89k
	I0528 20:41:46.478448   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pg89k
	I0528 20:41:46.478453   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:46.478463   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:46.478470   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:46.481950   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:46.678208   22579 request.go:629] Waited for 195.338894ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:41:46.678279   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:41:46.678284   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:46.678292   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:46.678300   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:46.681777   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:46.682526   22579 pod_ready.go:92] pod "kube-proxy-pg89k" in "kube-system" namespace has status "Ready":"True"
	I0528 20:41:46.682545   22579 pod_ready.go:81] duration metric: took 401.409669ms for pod "kube-proxy-pg89k" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:46.682554   22579 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-908878" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:46.877586   22579 request.go:629] Waited for 194.974945ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-908878
	I0528 20:41:46.877640   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-908878
	I0528 20:41:46.877646   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:46.877654   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:46.877659   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:46.880932   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:47.078104   22579 request.go:629] Waited for 196.356071ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-908878
	I0528 20:41:47.078162   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878
	I0528 20:41:47.078177   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:47.078189   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:47.078205   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:47.081375   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:47.082233   22579 pod_ready.go:92] pod "kube-scheduler-ha-908878" in "kube-system" namespace has status "Ready":"True"
	I0528 20:41:47.082256   22579 pod_ready.go:81] duration metric: took 399.695122ms for pod "kube-scheduler-ha-908878" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:47.082269   22579 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-908878-m02" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:47.277946   22579 request.go:629] Waited for 195.584259ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-908878-m02
	I0528 20:41:47.278014   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-908878-m02
	I0528 20:41:47.278020   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:47.278027   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:47.278031   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:47.281661   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:47.477823   22579 request.go:629] Waited for 195.407332ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:41:47.477899   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m02
	I0528 20:41:47.477910   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:47.477921   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:47.477932   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:47.481276   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:47.481960   22579 pod_ready.go:92] pod "kube-scheduler-ha-908878-m02" in "kube-system" namespace has status "Ready":"True"
	I0528 20:41:47.481979   22579 pod_ready.go:81] duration metric: took 399.698411ms for pod "kube-scheduler-ha-908878-m02" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:47.481991   22579 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-908878-m03" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:47.678063   22579 request.go:629] Waited for 196.000158ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-908878-m03
	I0528 20:41:47.678139   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-908878-m03
	I0528 20:41:47.678146   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:47.678157   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:47.678169   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:47.681293   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:47.878397   22579 request.go:629] Waited for 196.378653ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:47.878468   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-908878-m03
	I0528 20:41:47.878476   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:47.878487   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:47.878493   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:47.881699   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:47.882219   22579 pod_ready.go:92] pod "kube-scheduler-ha-908878-m03" in "kube-system" namespace has status "Ready":"True"
	I0528 20:41:47.882236   22579 pod_ready.go:81] duration metric: took 400.237383ms for pod "kube-scheduler-ha-908878-m03" in "kube-system" namespace to be "Ready" ...
	I0528 20:41:47.882248   22579 pod_ready.go:38] duration metric: took 12.800741549s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 20:41:47.882266   22579 api_server.go:52] waiting for apiserver process to appear ...
	I0528 20:41:47.882312   22579 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 20:41:47.898554   22579 api_server.go:72] duration metric: took 19.618274134s to wait for apiserver process to appear ...
	I0528 20:41:47.898575   22579 api_server.go:88] waiting for apiserver healthz status ...
	I0528 20:41:47.898594   22579 api_server.go:253] Checking apiserver healthz at https://192.168.39.100:8443/healthz ...
	I0528 20:41:47.903138   22579 api_server.go:279] https://192.168.39.100:8443/healthz returned 200:
	ok
	I0528 20:41:47.903203   22579 round_trippers.go:463] GET https://192.168.39.100:8443/version
	I0528 20:41:47.903214   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:47.903225   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:47.903233   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:47.904161   22579 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0528 20:41:47.904277   22579 api_server.go:141] control plane version: v1.30.1
	I0528 20:41:47.904296   22579 api_server.go:131] duration metric: took 5.714061ms to wait for apiserver health ...
	I0528 20:41:47.904306   22579 system_pods.go:43] waiting for kube-system pods to appear ...
	I0528 20:41:48.077697   22579 request.go:629] Waited for 173.320136ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods
	I0528 20:41:48.077803   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods
	I0528 20:41:48.077814   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:48.077823   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:48.077830   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:48.085436   22579 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0528 20:41:48.091810   22579 system_pods.go:59] 24 kube-system pods found
	I0528 20:41:48.091834   22579 system_pods.go:61] "coredns-7db6d8ff4d-5fmns" [41a3bda1-29ba-4982-baf5-0adc97b4eb45] Running
	I0528 20:41:48.091839   22579 system_pods.go:61] "coredns-7db6d8ff4d-mvx67" [0b51beb7-0397-4008-b878-97edd41c6b94] Running
	I0528 20:41:48.091843   22579 system_pods.go:61] "etcd-ha-908878" [4cfaba35-0bd9-476b-95c2-abd111c4fcac] Running
	I0528 20:41:48.091847   22579 system_pods.go:61] "etcd-ha-908878-m02" [cb4f24be-dbf9-4c42-9a55-29cf6f0b6ecc] Running
	I0528 20:41:48.091850   22579 system_pods.go:61] "etcd-ha-908878-m03" [e38e6404-063e-4b01-8079-395f96aa2036] Running
	I0528 20:41:48.091853   22579 system_pods.go:61] "kindnet-6prxw" [77fae8b9-3abd-4a39-81ec-cc782b891331] Running
	I0528 20:41:48.091856   22579 system_pods.go:61] "kindnet-fx2nj" [9d024f44-b6fe-4390-8b26-2f29f4fd5cdf] Running
	I0528 20:41:48.091859   22579 system_pods.go:61] "kindnet-x4mzh" [8069a7ea-0ab1-4064-b982-867dbdfd97aa] Running
	I0528 20:41:48.091862   22579 system_pods.go:61] "kube-apiserver-ha-908878" [ff63f2af-3fc5-496c-b468-7447defad5e6] Running
	I0528 20:41:48.091866   22579 system_pods.go:61] "kube-apiserver-ha-908878-m02" [3a56592b-67cd-44d0-8907-2a62d4a6c671] Running
	I0528 20:41:48.091869   22579 system_pods.go:61] "kube-apiserver-ha-908878-m03" [3b396a1d-9d28-469b-bddf-3a208c197207] Running
	I0528 20:41:48.091872   22579 system_pods.go:61] "kube-controller-manager-ha-908878" [e426060f-307d-41c7-8fb9-ab48709ce2a8] Running
	I0528 20:41:48.091876   22579 system_pods.go:61] "kube-controller-manager-ha-908878-m02" [232c3f41-5ba8-4fdf-848a-f8fb92f33a73] Running
	I0528 20:41:48.091879   22579 system_pods.go:61] "kube-controller-manager-ha-908878-m03" [43b1b03f-a6b5-4de9-afeb-6f488f3bd89e] Running
	I0528 20:41:48.091882   22579 system_pods.go:61] "kube-proxy-4vjp6" [142b5612-0c6b-4aa8-9410-646f2e2812bc] Running
	I0528 20:41:48.091885   22579 system_pods.go:61] "kube-proxy-ng8mq" [ca0b1264-09c7-44b2-ba8c-e145e825fdbe] Running
	I0528 20:41:48.091888   22579 system_pods.go:61] "kube-proxy-pg89k" [6eeda2cd-7b9e-440f-a8c3-c2ea8015106d] Running
	I0528 20:41:48.091891   22579 system_pods.go:61] "kube-scheduler-ha-908878" [7a9859a9-e92c-435b-a70e-5200f67d9589] Running
	I0528 20:41:48.091895   22579 system_pods.go:61] "kube-scheduler-ha-908878-m02" [c03b5557-cdca-4d39-800e-51a3a4f180b7] Running
	I0528 20:41:48.091898   22579 system_pods.go:61] "kube-scheduler-ha-908878-m03" [4699c008-ffdd-447b-a1b1-dc7776b60190] Running
	I0528 20:41:48.091901   22579 system_pods.go:61] "kube-vip-ha-908878" [45e2e6e2-6ea1-4498-aca1-9c3060eb9ca4] Running
	I0528 20:41:48.091904   22579 system_pods.go:61] "kube-vip-ha-908878-m02" [bcbc54fb-d0d4-422a-9e42-d61cd3f390ff] Running
	I0528 20:41:48.091911   22579 system_pods.go:61] "kube-vip-ha-908878-m03" [f1de9ce4-67d2-47ab-8a24-6766c35a73b9] Running
	I0528 20:41:48.091915   22579 system_pods.go:61] "storage-provisioner" [d79872e2-b267-446a-99dc-5bf9f398d31c] Running
	I0528 20:41:48.091920   22579 system_pods.go:74] duration metric: took 187.608951ms to wait for pod list to return data ...
	I0528 20:41:48.091934   22579 default_sa.go:34] waiting for default service account to be created ...
	I0528 20:41:48.278338   22579 request.go:629] Waited for 186.34176ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/default/serviceaccounts
	I0528 20:41:48.278399   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/default/serviceaccounts
	I0528 20:41:48.278412   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:48.278423   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:48.278432   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:48.282323   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:48.282422   22579 default_sa.go:45] found service account: "default"
	I0528 20:41:48.282434   22579 default_sa.go:55] duration metric: took 190.495296ms for default service account to be created ...
	I0528 20:41:48.282442   22579 system_pods.go:116] waiting for k8s-apps to be running ...
	I0528 20:41:48.477831   22579 request.go:629] Waited for 195.307744ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods
	I0528 20:41:48.477891   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods
	I0528 20:41:48.477896   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:48.477906   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:48.477911   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:48.488660   22579 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0528 20:41:48.494939   22579 system_pods.go:86] 24 kube-system pods found
	I0528 20:41:48.494964   22579 system_pods.go:89] "coredns-7db6d8ff4d-5fmns" [41a3bda1-29ba-4982-baf5-0adc97b4eb45] Running
	I0528 20:41:48.494972   22579 system_pods.go:89] "coredns-7db6d8ff4d-mvx67" [0b51beb7-0397-4008-b878-97edd41c6b94] Running
	I0528 20:41:48.494979   22579 system_pods.go:89] "etcd-ha-908878" [4cfaba35-0bd9-476b-95c2-abd111c4fcac] Running
	I0528 20:41:48.494985   22579 system_pods.go:89] "etcd-ha-908878-m02" [cb4f24be-dbf9-4c42-9a55-29cf6f0b6ecc] Running
	I0528 20:41:48.494995   22579 system_pods.go:89] "etcd-ha-908878-m03" [e38e6404-063e-4b01-8079-395f96aa2036] Running
	I0528 20:41:48.495001   22579 system_pods.go:89] "kindnet-6prxw" [77fae8b9-3abd-4a39-81ec-cc782b891331] Running
	I0528 20:41:48.495007   22579 system_pods.go:89] "kindnet-fx2nj" [9d024f44-b6fe-4390-8b26-2f29f4fd5cdf] Running
	I0528 20:41:48.495014   22579 system_pods.go:89] "kindnet-x4mzh" [8069a7ea-0ab1-4064-b982-867dbdfd97aa] Running
	I0528 20:41:48.495027   22579 system_pods.go:89] "kube-apiserver-ha-908878" [ff63f2af-3fc5-496c-b468-7447defad5e6] Running
	I0528 20:41:48.495042   22579 system_pods.go:89] "kube-apiserver-ha-908878-m02" [3a56592b-67cd-44d0-8907-2a62d4a6c671] Running
	I0528 20:41:48.495048   22579 system_pods.go:89] "kube-apiserver-ha-908878-m03" [3b396a1d-9d28-469b-bddf-3a208c197207] Running
	I0528 20:41:48.495056   22579 system_pods.go:89] "kube-controller-manager-ha-908878" [e426060f-307d-41c7-8fb9-ab48709ce2a8] Running
	I0528 20:41:48.495065   22579 system_pods.go:89] "kube-controller-manager-ha-908878-m02" [232c3f41-5ba8-4fdf-848a-f8fb92f33a73] Running
	I0528 20:41:48.495077   22579 system_pods.go:89] "kube-controller-manager-ha-908878-m03" [43b1b03f-a6b5-4de9-afeb-6f488f3bd89e] Running
	I0528 20:41:48.495084   22579 system_pods.go:89] "kube-proxy-4vjp6" [142b5612-0c6b-4aa8-9410-646f2e2812bc] Running
	I0528 20:41:48.495094   22579 system_pods.go:89] "kube-proxy-ng8mq" [ca0b1264-09c7-44b2-ba8c-e145e825fdbe] Running
	I0528 20:41:48.495101   22579 system_pods.go:89] "kube-proxy-pg89k" [6eeda2cd-7b9e-440f-a8c3-c2ea8015106d] Running
	I0528 20:41:48.495111   22579 system_pods.go:89] "kube-scheduler-ha-908878" [7a9859a9-e92c-435b-a70e-5200f67d9589] Running
	I0528 20:41:48.495119   22579 system_pods.go:89] "kube-scheduler-ha-908878-m02" [c03b5557-cdca-4d39-800e-51a3a4f180b7] Running
	I0528 20:41:48.495129   22579 system_pods.go:89] "kube-scheduler-ha-908878-m03" [4699c008-ffdd-447b-a1b1-dc7776b60190] Running
	I0528 20:41:48.495136   22579 system_pods.go:89] "kube-vip-ha-908878" [45e2e6e2-6ea1-4498-aca1-9c3060eb9ca4] Running
	I0528 20:41:48.495145   22579 system_pods.go:89] "kube-vip-ha-908878-m02" [bcbc54fb-d0d4-422a-9e42-d61cd3f390ff] Running
	I0528 20:41:48.495152   22579 system_pods.go:89] "kube-vip-ha-908878-m03" [f1de9ce4-67d2-47ab-8a24-6766c35a73b9] Running
	I0528 20:41:48.495161   22579 system_pods.go:89] "storage-provisioner" [d79872e2-b267-446a-99dc-5bf9f398d31c] Running
	I0528 20:41:48.495171   22579 system_pods.go:126] duration metric: took 212.720492ms to wait for k8s-apps to be running ...
	I0528 20:41:48.495183   22579 system_svc.go:44] waiting for kubelet service to be running ....
	I0528 20:41:48.495230   22579 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 20:41:48.512049   22579 system_svc.go:56] duration metric: took 16.837316ms WaitForService to wait for kubelet
	I0528 20:41:48.512080   22579 kubeadm.go:576] duration metric: took 20.231804569s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 20:41:48.512097   22579 node_conditions.go:102] verifying NodePressure condition ...
	I0528 20:41:48.677717   22579 request.go:629] Waited for 165.458182ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes
	I0528 20:41:48.677788   22579 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes
	I0528 20:41:48.677796   22579 round_trippers.go:469] Request Headers:
	I0528 20:41:48.677806   22579 round_trippers.go:473]     Accept: application/json, */*
	I0528 20:41:48.677812   22579 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0528 20:41:48.681329   22579 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0528 20:41:48.682448   22579 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 20:41:48.682467   22579 node_conditions.go:123] node cpu capacity is 2
	I0528 20:41:48.682476   22579 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 20:41:48.682482   22579 node_conditions.go:123] node cpu capacity is 2
	I0528 20:41:48.682488   22579 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 20:41:48.682495   22579 node_conditions.go:123] node cpu capacity is 2
	I0528 20:41:48.682500   22579 node_conditions.go:105] duration metric: took 170.39831ms to run NodePressure ...
	I0528 20:41:48.682517   22579 start.go:240] waiting for startup goroutines ...
	I0528 20:41:48.682538   22579 start.go:254] writing updated cluster config ...
	I0528 20:41:48.682825   22579 ssh_runner.go:195] Run: rm -f paused
	I0528 20:41:48.732334   22579 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0528 20:41:48.734539   22579 out.go:177] * Done! kubectl is now configured to use "ha-908878" cluster and "default" namespace by default
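	The startup log above ends by confirming that the kubelet service is active and that each of the three nodes reports 2 CPUs and 17734596Ki of ephemeral storage before declaring the cluster ready. A minimal sketch for repeating those two checks by hand, assuming the ha-908878 profile from this log is still running and kubectl is pointed at its context:
	
	    # mirror the systemctl probe from the log (exit status 0 / "active" means kubelet is up)
	    minikube -p ha-908878 ssh "sudo systemctl is-active kubelet"
	    # print the per-node CPU and ephemeral-storage capacity behind the NodePressure check
	    kubectl --context ha-908878 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.capacity.cpu}{"\t"}{.status.capacity.ephemeral-storage}{"\n"}{end}'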
	
	
	==> CRI-O <==
	May 28 20:46:20 ha-908878 crio[681]: time="2024-05-28 20:46:20.242290206Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716929180242264631,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=584fa570-ebf6-4736-9245-040bb989667b name=/runtime.v1.ImageService/ImageFsInfo
	May 28 20:46:20 ha-908878 crio[681]: time="2024-05-28 20:46:20.242855974Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=609b673e-7a4c-4bcc-a4cb-e63252f88534 name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:46:20 ha-908878 crio[681]: time="2024-05-28 20:46:20.242947795Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=609b673e-7a4c-4bcc-a4cb-e63252f88534 name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:46:20 ha-908878 crio[681]: time="2024-05-28 20:46:20.243166631Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:92c83dd481e567399f2a52a7fbc48eccc99c0cb418fa571fd10df44f361a9f39,PodSandboxId:dfbac4c22bc27b76f3af221ef78b5eb13bffba587238a75f6978ec32db0d2669,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716928912917476588,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ljbzs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a49d7b7-d8ae-44a8-8393-51781cf73591,},Annotations:map[string]string{io.kubernetes.container.hash: 1fd948c0,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c38e07fa546e3574439db0d8c463f8598be9b5ee1b022328a10e812abe191a6,PodSandboxId:fb8a83ba500b4c9c50bcef1be750bfc7e5a7db94160b054581efcafa03519147,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716928766590690596,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mvx67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b51beb7-0397-4008-b878-97edd41c6b94,},Annotations:map[string]string{io.kubernetes.container.hash: 10935396,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b6fe231fc7dbce25d01bf248b4e363bd410d67b785874c03d67b9895b05cd8d,PodSandboxId:0e94953284a5e4d09d285560204b96d126960c1c22367047d92a0697893879af,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716928766576204824,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: d79872e2-b267-446a-99dc-5bf9f398d31c,},Annotations:map[string]string{io.kubernetes.container.hash: 1d6aac92,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2470320e3bec5ccf0d58f255e4bc440917dfb3fae12b2a48365113a51d7dd6e9,PodSandboxId:5333c6894c4460cb033cd36d59ef316b9716e099ae9997c69bc25e869c87b03d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716928766572067060,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5fmns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a3bda1-29
ba-4982-baf5-0adc97b4eb45,},Annotations:map[string]string{io.kubernetes.container.hash: dc173f12,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ea51bf984916bead29c664359f65ea790360f73bf15486bdf96c758189bd69,PodSandboxId:1f695b783edb95cab72476e5f23428dad45f722dd44cbb0bff30bab6aa207223,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CO
NTAINER_RUNNING,CreatedAt:1716928765126540501,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4mzh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8069a7ea-0ab1-4064-b982-867dbdfd97aa,},Annotations:map[string]string{io.kubernetes.container.hash: 730aff3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97ba5f2725852aa19d200b1005c8bc9678a21a2b0a0b768cbf10f1dde692ecbe,PodSandboxId:2a5f076d2569c649f3e0bcb898d11d4c19a26e70dc189cb8501def24b83cdcec,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:171692876
1367490977,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ng8mq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0b1264-09c7-44b2-ba8c-e145e825fdbe,},Annotations:map[string]string{io.kubernetes.container.hash: 3bac967e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20cf414ed60510910dc01761f0afc869180c224d4e26872a468b10e603e8e786,PodSandboxId:9d1408565bd5163dd277d755c852f8d09b92ff4f0ac886493b78b17bc70e95f6,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17169287444
23802201,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10c65935005aeeb3bc67f128e502ec57,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05d5882852e6eb482b9e3ba850d60958305fa84234fd741f3345967a57b1c1f9,PodSandboxId:54beb07b658e504e1a48a48b45286f12adc245e7ae631b29d6fdd4debbc5e82a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716928741087996451,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f1c036da804e98d08f1608248bc0a8f,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:650c6f374c3b3e6697c77d68d23551a1335fae362d137a464df72e2bc23e4e14,PodSandboxId:232d528c76896db38a96f5da2a0ed0761f4f63d26ec15dd06b5b221600ccabff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716928740991369604,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35af7b91312437941e451d7e638db460,},Annotations:map[string]string{io.kubernetes.container.hash: cd1ed920,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aece72d9b21aa208ac4ce8512a57af63132d5882a8675650a980fa6b531af247,PodSandboxId:815ef28c8c10574c11bd2dce9a1acf1d7bfbf4859f7c59b844307688bca34a43,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716928741054454839,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c574865fe7260be39c1b7f67609414,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f926e075722f1f9cbd1b9f6559baa2071c1f539f729ddcb3572a87f64a8934e9,PodSandboxId:ce2508233e4b37815baef24981bbc12636f48bcc8015076d16dce0f2de38f726,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716928740948619502,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 386e57d3e5870130913571793cc3ee94,},Annotations:map[string]string{io.kubernetes.container.hash: f40b1b90,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=609b673e-7a4c-4bcc-a4cb-e63252f88534 name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:46:20 ha-908878 crio[681]: time="2024-05-28 20:46:20.283840532Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cffbdc17-7797-4c5a-b2a6-eff95015c7c9 name=/runtime.v1.RuntimeService/Version
	May 28 20:46:20 ha-908878 crio[681]: time="2024-05-28 20:46:20.284025529Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cffbdc17-7797-4c5a-b2a6-eff95015c7c9 name=/runtime.v1.RuntimeService/Version
	May 28 20:46:20 ha-908878 crio[681]: time="2024-05-28 20:46:20.285257886Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6574a3a8-2829-437a-b453-7afbf20adfd3 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 20:46:20 ha-908878 crio[681]: time="2024-05-28 20:46:20.285728029Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716929180285705343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6574a3a8-2829-437a-b453-7afbf20adfd3 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 20:46:20 ha-908878 crio[681]: time="2024-05-28 20:46:20.286374510Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=531b2865-4416-45f7-ba4e-5af3de3a8d1f name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:46:20 ha-908878 crio[681]: time="2024-05-28 20:46:20.286452197Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=531b2865-4416-45f7-ba4e-5af3de3a8d1f name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:46:20 ha-908878 crio[681]: time="2024-05-28 20:46:20.286683156Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:92c83dd481e567399f2a52a7fbc48eccc99c0cb418fa571fd10df44f361a9f39,PodSandboxId:dfbac4c22bc27b76f3af221ef78b5eb13bffba587238a75f6978ec32db0d2669,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716928912917476588,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ljbzs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a49d7b7-d8ae-44a8-8393-51781cf73591,},Annotations:map[string]string{io.kubernetes.container.hash: 1fd948c0,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c38e07fa546e3574439db0d8c463f8598be9b5ee1b022328a10e812abe191a6,PodSandboxId:fb8a83ba500b4c9c50bcef1be750bfc7e5a7db94160b054581efcafa03519147,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716928766590690596,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mvx67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b51beb7-0397-4008-b878-97edd41c6b94,},Annotations:map[string]string{io.kubernetes.container.hash: 10935396,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b6fe231fc7dbce25d01bf248b4e363bd410d67b785874c03d67b9895b05cd8d,PodSandboxId:0e94953284a5e4d09d285560204b96d126960c1c22367047d92a0697893879af,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716928766576204824,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: d79872e2-b267-446a-99dc-5bf9f398d31c,},Annotations:map[string]string{io.kubernetes.container.hash: 1d6aac92,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2470320e3bec5ccf0d58f255e4bc440917dfb3fae12b2a48365113a51d7dd6e9,PodSandboxId:5333c6894c4460cb033cd36d59ef316b9716e099ae9997c69bc25e869c87b03d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716928766572067060,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5fmns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a3bda1-29
ba-4982-baf5-0adc97b4eb45,},Annotations:map[string]string{io.kubernetes.container.hash: dc173f12,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ea51bf984916bead29c664359f65ea790360f73bf15486bdf96c758189bd69,PodSandboxId:1f695b783edb95cab72476e5f23428dad45f722dd44cbb0bff30bab6aa207223,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CO
NTAINER_RUNNING,CreatedAt:1716928765126540501,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4mzh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8069a7ea-0ab1-4064-b982-867dbdfd97aa,},Annotations:map[string]string{io.kubernetes.container.hash: 730aff3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97ba5f2725852aa19d200b1005c8bc9678a21a2b0a0b768cbf10f1dde692ecbe,PodSandboxId:2a5f076d2569c649f3e0bcb898d11d4c19a26e70dc189cb8501def24b83cdcec,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:171692876
1367490977,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ng8mq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0b1264-09c7-44b2-ba8c-e145e825fdbe,},Annotations:map[string]string{io.kubernetes.container.hash: 3bac967e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20cf414ed60510910dc01761f0afc869180c224d4e26872a468b10e603e8e786,PodSandboxId:9d1408565bd5163dd277d755c852f8d09b92ff4f0ac886493b78b17bc70e95f6,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17169287444
23802201,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10c65935005aeeb3bc67f128e502ec57,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05d5882852e6eb482b9e3ba850d60958305fa84234fd741f3345967a57b1c1f9,PodSandboxId:54beb07b658e504e1a48a48b45286f12adc245e7ae631b29d6fdd4debbc5e82a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716928741087996451,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f1c036da804e98d08f1608248bc0a8f,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:650c6f374c3b3e6697c77d68d23551a1335fae362d137a464df72e2bc23e4e14,PodSandboxId:232d528c76896db38a96f5da2a0ed0761f4f63d26ec15dd06b5b221600ccabff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716928740991369604,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35af7b91312437941e451d7e638db460,},Annotations:map[string]string{io.kubernetes.container.hash: cd1ed920,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aece72d9b21aa208ac4ce8512a57af63132d5882a8675650a980fa6b531af247,PodSandboxId:815ef28c8c10574c11bd2dce9a1acf1d7bfbf4859f7c59b844307688bca34a43,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716928741054454839,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c574865fe7260be39c1b7f67609414,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f926e075722f1f9cbd1b9f6559baa2071c1f539f729ddcb3572a87f64a8934e9,PodSandboxId:ce2508233e4b37815baef24981bbc12636f48bcc8015076d16dce0f2de38f726,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716928740948619502,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 386e57d3e5870130913571793cc3ee94,},Annotations:map[string]string{io.kubernetes.container.hash: f40b1b90,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=531b2865-4416-45f7-ba4e-5af3de3a8d1f name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:46:20 ha-908878 crio[681]: time="2024-05-28 20:46:20.344226915Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ffcea3ed-d0f1-4e18-a473-1f85e2edfabf name=/runtime.v1.RuntimeService/Version
	May 28 20:46:20 ha-908878 crio[681]: time="2024-05-28 20:46:20.344364275Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ffcea3ed-d0f1-4e18-a473-1f85e2edfabf name=/runtime.v1.RuntimeService/Version
	May 28 20:46:20 ha-908878 crio[681]: time="2024-05-28 20:46:20.346224710Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=98ca7a93-8791-400c-adea-97a82fde479e name=/runtime.v1.ImageService/ImageFsInfo
	May 28 20:46:20 ha-908878 crio[681]: time="2024-05-28 20:46:20.346738998Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716929180346711064,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=98ca7a93-8791-400c-adea-97a82fde479e name=/runtime.v1.ImageService/ImageFsInfo
	May 28 20:46:20 ha-908878 crio[681]: time="2024-05-28 20:46:20.347307755Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2d9e76ef-9121-4f75-ae22-a1160eaecd3f name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:46:20 ha-908878 crio[681]: time="2024-05-28 20:46:20.347368111Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2d9e76ef-9121-4f75-ae22-a1160eaecd3f name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:46:20 ha-908878 crio[681]: time="2024-05-28 20:46:20.347610365Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:92c83dd481e567399f2a52a7fbc48eccc99c0cb418fa571fd10df44f361a9f39,PodSandboxId:dfbac4c22bc27b76f3af221ef78b5eb13bffba587238a75f6978ec32db0d2669,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716928912917476588,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ljbzs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a49d7b7-d8ae-44a8-8393-51781cf73591,},Annotations:map[string]string{io.kubernetes.container.hash: 1fd948c0,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c38e07fa546e3574439db0d8c463f8598be9b5ee1b022328a10e812abe191a6,PodSandboxId:fb8a83ba500b4c9c50bcef1be750bfc7e5a7db94160b054581efcafa03519147,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716928766590690596,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mvx67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b51beb7-0397-4008-b878-97edd41c6b94,},Annotations:map[string]string{io.kubernetes.container.hash: 10935396,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b6fe231fc7dbce25d01bf248b4e363bd410d67b785874c03d67b9895b05cd8d,PodSandboxId:0e94953284a5e4d09d285560204b96d126960c1c22367047d92a0697893879af,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716928766576204824,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: d79872e2-b267-446a-99dc-5bf9f398d31c,},Annotations:map[string]string{io.kubernetes.container.hash: 1d6aac92,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2470320e3bec5ccf0d58f255e4bc440917dfb3fae12b2a48365113a51d7dd6e9,PodSandboxId:5333c6894c4460cb033cd36d59ef316b9716e099ae9997c69bc25e869c87b03d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716928766572067060,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5fmns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a3bda1-29
ba-4982-baf5-0adc97b4eb45,},Annotations:map[string]string{io.kubernetes.container.hash: dc173f12,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ea51bf984916bead29c664359f65ea790360f73bf15486bdf96c758189bd69,PodSandboxId:1f695b783edb95cab72476e5f23428dad45f722dd44cbb0bff30bab6aa207223,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CO
NTAINER_RUNNING,CreatedAt:1716928765126540501,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4mzh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8069a7ea-0ab1-4064-b982-867dbdfd97aa,},Annotations:map[string]string{io.kubernetes.container.hash: 730aff3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97ba5f2725852aa19d200b1005c8bc9678a21a2b0a0b768cbf10f1dde692ecbe,PodSandboxId:2a5f076d2569c649f3e0bcb898d11d4c19a26e70dc189cb8501def24b83cdcec,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:171692876
1367490977,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ng8mq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0b1264-09c7-44b2-ba8c-e145e825fdbe,},Annotations:map[string]string{io.kubernetes.container.hash: 3bac967e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20cf414ed60510910dc01761f0afc869180c224d4e26872a468b10e603e8e786,PodSandboxId:9d1408565bd5163dd277d755c852f8d09b92ff4f0ac886493b78b17bc70e95f6,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17169287444
23802201,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10c65935005aeeb3bc67f128e502ec57,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05d5882852e6eb482b9e3ba850d60958305fa84234fd741f3345967a57b1c1f9,PodSandboxId:54beb07b658e504e1a48a48b45286f12adc245e7ae631b29d6fdd4debbc5e82a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716928741087996451,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f1c036da804e98d08f1608248bc0a8f,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:650c6f374c3b3e6697c77d68d23551a1335fae362d137a464df72e2bc23e4e14,PodSandboxId:232d528c76896db38a96f5da2a0ed0761f4f63d26ec15dd06b5b221600ccabff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716928740991369604,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35af7b91312437941e451d7e638db460,},Annotations:map[string]string{io.kubernetes.container.hash: cd1ed920,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aece72d9b21aa208ac4ce8512a57af63132d5882a8675650a980fa6b531af247,PodSandboxId:815ef28c8c10574c11bd2dce9a1acf1d7bfbf4859f7c59b844307688bca34a43,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716928741054454839,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c574865fe7260be39c1b7f67609414,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f926e075722f1f9cbd1b9f6559baa2071c1f539f729ddcb3572a87f64a8934e9,PodSandboxId:ce2508233e4b37815baef24981bbc12636f48bcc8015076d16dce0f2de38f726,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716928740948619502,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 386e57d3e5870130913571793cc3ee94,},Annotations:map[string]string{io.kubernetes.container.hash: f40b1b90,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2d9e76ef-9121-4f75-ae22-a1160eaecd3f name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:46:20 ha-908878 crio[681]: time="2024-05-28 20:46:20.388519425Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=50cf4555-2bee-493c-a953-e5266bf7c392 name=/runtime.v1.RuntimeService/Version
	May 28 20:46:20 ha-908878 crio[681]: time="2024-05-28 20:46:20.388616731Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=50cf4555-2bee-493c-a953-e5266bf7c392 name=/runtime.v1.RuntimeService/Version
	May 28 20:46:20 ha-908878 crio[681]: time="2024-05-28 20:46:20.390128172Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1ce39fa5-5340-4a50-bf73-01fca051ad79 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 20:46:20 ha-908878 crio[681]: time="2024-05-28 20:46:20.390612817Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716929180390589998,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1ce39fa5-5340-4a50-bf73-01fca051ad79 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 20:46:20 ha-908878 crio[681]: time="2024-05-28 20:46:20.391464061Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4ccc4256-48b1-48a3-8a57-13742c2e16fd name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:46:20 ha-908878 crio[681]: time="2024-05-28 20:46:20.391535607Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4ccc4256-48b1-48a3-8a57-13742c2e16fd name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:46:20 ha-908878 crio[681]: time="2024-05-28 20:46:20.391755956Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:92c83dd481e567399f2a52a7fbc48eccc99c0cb418fa571fd10df44f361a9f39,PodSandboxId:dfbac4c22bc27b76f3af221ef78b5eb13bffba587238a75f6978ec32db0d2669,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716928912917476588,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ljbzs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a49d7b7-d8ae-44a8-8393-51781cf73591,},Annotations:map[string]string{io.kubernetes.container.hash: 1fd948c0,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c38e07fa546e3574439db0d8c463f8598be9b5ee1b022328a10e812abe191a6,PodSandboxId:fb8a83ba500b4c9c50bcef1be750bfc7e5a7db94160b054581efcafa03519147,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716928766590690596,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mvx67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b51beb7-0397-4008-b878-97edd41c6b94,},Annotations:map[string]string{io.kubernetes.container.hash: 10935396,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b6fe231fc7dbce25d01bf248b4e363bd410d67b785874c03d67b9895b05cd8d,PodSandboxId:0e94953284a5e4d09d285560204b96d126960c1c22367047d92a0697893879af,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716928766576204824,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: d79872e2-b267-446a-99dc-5bf9f398d31c,},Annotations:map[string]string{io.kubernetes.container.hash: 1d6aac92,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2470320e3bec5ccf0d58f255e4bc440917dfb3fae12b2a48365113a51d7dd6e9,PodSandboxId:5333c6894c4460cb033cd36d59ef316b9716e099ae9997c69bc25e869c87b03d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716928766572067060,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5fmns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a3bda1-29
ba-4982-baf5-0adc97b4eb45,},Annotations:map[string]string{io.kubernetes.container.hash: dc173f12,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ea51bf984916bead29c664359f65ea790360f73bf15486bdf96c758189bd69,PodSandboxId:1f695b783edb95cab72476e5f23428dad45f722dd44cbb0bff30bab6aa207223,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CO
NTAINER_RUNNING,CreatedAt:1716928765126540501,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4mzh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8069a7ea-0ab1-4064-b982-867dbdfd97aa,},Annotations:map[string]string{io.kubernetes.container.hash: 730aff3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97ba5f2725852aa19d200b1005c8bc9678a21a2b0a0b768cbf10f1dde692ecbe,PodSandboxId:2a5f076d2569c649f3e0bcb898d11d4c19a26e70dc189cb8501def24b83cdcec,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:171692876
1367490977,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ng8mq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0b1264-09c7-44b2-ba8c-e145e825fdbe,},Annotations:map[string]string{io.kubernetes.container.hash: 3bac967e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20cf414ed60510910dc01761f0afc869180c224d4e26872a468b10e603e8e786,PodSandboxId:9d1408565bd5163dd277d755c852f8d09b92ff4f0ac886493b78b17bc70e95f6,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17169287444
23802201,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10c65935005aeeb3bc67f128e502ec57,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05d5882852e6eb482b9e3ba850d60958305fa84234fd741f3345967a57b1c1f9,PodSandboxId:54beb07b658e504e1a48a48b45286f12adc245e7ae631b29d6fdd4debbc5e82a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716928741087996451,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f1c036da804e98d08f1608248bc0a8f,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:650c6f374c3b3e6697c77d68d23551a1335fae362d137a464df72e2bc23e4e14,PodSandboxId:232d528c76896db38a96f5da2a0ed0761f4f63d26ec15dd06b5b221600ccabff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716928740991369604,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35af7b91312437941e451d7e638db460,},Annotations:map[string]string{io.kubernetes.container.hash: cd1ed920,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aece72d9b21aa208ac4ce8512a57af63132d5882a8675650a980fa6b531af247,PodSandboxId:815ef28c8c10574c11bd2dce9a1acf1d7bfbf4859f7c59b844307688bca34a43,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716928741054454839,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c574865fe7260be39c1b7f67609414,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f926e075722f1f9cbd1b9f6559baa2071c1f539f729ddcb3572a87f64a8934e9,PodSandboxId:ce2508233e4b37815baef24981bbc12636f48bcc8015076d16dce0f2de38f726,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716928740948619502,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 386e57d3e5870130913571793cc3ee94,},Annotations:map[string]string{io.kubernetes.container.hash: f40b1b90,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4ccc4256-48b1-48a3-8a57-13742c2e16fd name=/runtime.v1.RuntimeService/ListContainers
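	The CRI-O entries above are debug-level records of CRI calls (Version, ImageFsInfo, ListContainers) hitting the runtime socket while the log was being collected; each response repeats the same container list. The same information can be pulled in a readable form with crictl. A rough sketch, assuming crictl is available inside the ha-908878 VM:
	
	    # runtime name and version, as in the VersionResponse lines above
	    minikube -p ha-908878 ssh "sudo crictl version"
	    # running containers, the human-readable form of ListContainersResponse
	    minikube -p ha-908878 ssh "sudo crictl ps"
	    # usage of the image filesystem at /var/lib/containers/storage/overlay-images
	    minikube -p ha-908878 ssh "sudo crictl imagefsinfo"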
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	92c83dd481e56       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   dfbac4c22bc27       busybox-fc5497c4f-ljbzs
	7c38e07fa546e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   fb8a83ba500b4       coredns-7db6d8ff4d-mvx67
	0b6fe231fc7db       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   0e94953284a5e       storage-provisioner
	2470320e3bec5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   5333c6894c446       coredns-7db6d8ff4d-5fmns
	a7ea51bf98491       docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266    6 minutes ago       Running             kindnet-cni               0                   1f695b783edb9       kindnet-x4mzh
	97ba5f2725852       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      6 minutes ago       Running             kube-proxy                0                   2a5f076d2569c       kube-proxy-ng8mq
	20cf414ed6051       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   9d1408565bd51       kube-vip-ha-908878
	05d5882852e6e       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      7 minutes ago       Running             kube-scheduler            0                   54beb07b658e5       kube-scheduler-ha-908878
	aece72d9b21aa       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      7 minutes ago       Running             kube-controller-manager   0                   815ef28c8c105       kube-controller-manager-ha-908878
	650c6f374c3b3       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   232d528c76896       etcd-ha-908878
	f926e075722f1       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      7 minutes ago       Running             kube-apiserver            0                   ce2508233e4b3       kube-apiserver-ha-908878
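
The container status table above is the human-readable view of the same `/runtime.v1.RuntimeService/ListContainers` RPC that shows up in the CRI interceptor log at the top of this excerpt. As a rough illustration only (not part of the test suite), a minimal Go sketch that issues that call directly against the CRI-O socket advertised in the node's cri-socket annotation (`/var/run/crio/crio.sock`) might look like this:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the CRI-O socket listed in the kubeadm.alpha.kubernetes.io/cri-socket annotation.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Same RPC as /runtime.v1.RuntimeService/ListContainers in the log above.
	resp, err := runtimeapi.NewRuntimeServiceClient(conn).ListContainers(ctx,
		&runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		// Each entry carries the fields shown in the table: ID, name, state, owning pod.
		fmt.Printf("%s  %s  %s  %s\n",
			c.Id, c.Metadata.Name, c.State, c.Labels["io.kubernetes.pod.name"])
	}
}
```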
	
	
	==> coredns [2470320e3bec5ccf0d58f255e4bc440917dfb3fae12b2a48365113a51d7dd6e9] <==
	[INFO] 10.244.1.2:56205 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000131203s
	[INFO] 10.244.1.2:38624 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000093706s
	[INFO] 10.244.2.2:58947 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117263s
	[INFO] 10.244.2.2:42241 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004799735s
	[INFO] 10.244.2.2:34187 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000308919s
	[INFO] 10.244.2.2:41613 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002489251s
	[INFO] 10.244.2.2:55408 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000147549s
	[INFO] 10.244.0.4:57170 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000374705s
	[INFO] 10.244.0.4:58966 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000155963s
	[INFO] 10.244.0.4:35423 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000111865s
	[INFO] 10.244.1.2:37835 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000079714s
	[INFO] 10.244.1.2:45922 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000128914s
	[INFO] 10.244.2.2:49120 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102234s
	[INFO] 10.244.2.2:59817 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113316s
	[INFO] 10.244.1.2:33990 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104132s
	[INFO] 10.244.1.2:57343 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000065665s
	[INFO] 10.244.1.2:37008 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000144249s
	[INFO] 10.244.2.2:57641 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000201576s
	[INFO] 10.244.0.4:55430 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00016202s
	[INFO] 10.244.0.4:58197 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000154574s
	[INFO] 10.244.0.4:43002 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000159971s
	[INFO] 10.244.1.2:33008 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159565s
	[INFO] 10.244.1.2:55799 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000106231s
	[INFO] 10.244.1.2:34935 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000119985s
	[INFO] 10.244.1.2:55524 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000077247s
	
	
	==> coredns [7c38e07fa546e3574439db0d8c463f8598be9b5ee1b022328a10e812abe191a6] <==
	[INFO] 10.244.2.2:34220 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000181008s
	[INFO] 10.244.2.2:45561 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000220146s
	[INFO] 10.244.2.2:58602 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000170027s
	[INFO] 10.244.0.4:43029 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001811296s
	[INFO] 10.244.0.4:49612 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098819s
	[INFO] 10.244.0.4:33728 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000042492s
	[INFO] 10.244.0.4:34284 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001158314s
	[INFO] 10.244.0.4:52540 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000045508s
	[INFO] 10.244.1.2:36534 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139592s
	[INFO] 10.244.1.2:55059 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00181265s
	[INFO] 10.244.1.2:57133 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001147785s
	[INFO] 10.244.1.2:59156 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00008284s
	[INFO] 10.244.1.2:56011 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000189969s
	[INFO] 10.244.1.2:57157 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076075s
	[INFO] 10.244.2.2:38176 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112538s
	[INFO] 10.244.2.2:54457 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000111343s
	[INFO] 10.244.0.4:46728 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104994s
	[INFO] 10.244.0.4:49514 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000077463s
	[INFO] 10.244.0.4:40805 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103396s
	[INFO] 10.244.0.4:41445 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093035s
	[INFO] 10.244.1.2:48615 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169745s
	[INFO] 10.244.2.2:39740 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00022698s
	[INFO] 10.244.2.2:42139 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000182159s
	[INFO] 10.244.2.2:54665 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00035602s
	[INFO] 10.244.0.4:33063 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000104255s
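
Each CoreDNS line above comes from the query log and packs the question and answer metadata into one record: client address, query ID, then `type class name protocol size DO-bit bufsize` inside the quotes, followed by the response code, header flags, response size, and duration. A small Go sketch (illustrative only; it simply assumes that layout) for pulling the fields out of one such line:

```go
package main

import (
	"fmt"
	"regexp"
)

// Matches the query-log layout visible above:
// [INFO] client:port - id "type class name proto size do bufsize" rcode flags rsize duration
var queryLine = regexp.MustCompile(
	`^\[INFO\] (\S+) - (\d+) "(\S+) (\S+) (\S+) (\S+) (\d+) (\S+) (\d+)" (\S+) (\S+) (\d+) (\S+)$`)

func main() {
	sample := `[INFO] 10.244.1.2:56205 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000131203s`
	m := queryLine.FindStringSubmatch(sample)
	if m == nil {
		fmt.Println("no match")
		return
	}
	// Indexes follow the capture groups left to right.
	fmt.Printf("client=%s type=%s name=%s rcode=%s duration=%s\n",
		m[1], m[3], m[5], m[10], m[13])
}
```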
	
	
	==> describe nodes <==
	Name:               ha-908878
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-908878
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82
	                    minikube.k8s.io/name=ha-908878
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_28T20_39_08_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 20:39:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-908878
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 20:46:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 May 2024 20:42:10 +0000   Tue, 28 May 2024 20:39:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 May 2024 20:42:10 +0000   Tue, 28 May 2024 20:39:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 May 2024 20:42:10 +0000   Tue, 28 May 2024 20:39:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 May 2024 20:42:10 +0000   Tue, 28 May 2024 20:39:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.100
	  Hostname:    ha-908878
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a470f4bebd094a03b2a08db3a205d097
	  System UUID:                a470f4be-bd09-4a03-b2a0-8db3a205d097
	  Boot ID:                    e5dc2485-8c44-4c4f-899c-7eb02750525b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-ljbzs              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 coredns-7db6d8ff4d-5fmns             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m
	  kube-system                 coredns-7db6d8ff4d-mvx67             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m
	  kube-system                 etcd-ha-908878                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m13s
	  kube-system                 kindnet-x4mzh                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m1s
	  kube-system                 kube-apiserver-ha-908878             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m13s
	  kube-system                 kube-controller-manager-ha-908878    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m13s
	  kube-system                 kube-proxy-ng8mq                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m1s
	  kube-system                 kube-scheduler-ha-908878             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m13s
	  kube-system                 kube-vip-ha-908878                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m15s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m59s  kube-proxy       
	  Normal  Starting                 7m13s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m13s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m13s  kubelet          Node ha-908878 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m13s  kubelet          Node ha-908878 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m13s  kubelet          Node ha-908878 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m1s   node-controller  Node ha-908878 event: Registered Node ha-908878 in Controller
	  Normal  NodeReady                6m55s  kubelet          Node ha-908878 status is now: NodeReady
	  Normal  RegisteredNode           5m53s  node-controller  Node ha-908878 event: Registered Node ha-908878 in Controller
	  Normal  RegisteredNode           4m38s  node-controller  Node ha-908878 event: Registered Node ha-908878 in Controller
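
For orientation, the percentages in the Allocated resources block above are just the summed pod requests and limits divided by the node's Allocatable values (2 CPUs and 2164184Ki of memory on every node in this cluster). A short Go sketch, purely illustrative, reproducing the 47% CPU and 13% memory request figures for ha-908878:

```go
package main

import "fmt"

func main() {
	// Allocatable on ha-908878, taken from the node description above.
	allocatableCPUMilli := int64(2 * 1000) // 2 CPUs -> 2000m
	allocatableMemKi := int64(2164184)     // memory in Ki

	// Summed pod requests from the "Allocated resources" block.
	requestedCPUMilli := int64(950)     // 950m
	requestedMemKi := int64(290 * 1024) // 290Mi expressed in Ki

	fmt.Printf("cpu:    %d%%\n", requestedCPUMilli*100/allocatableCPUMilli) // 47
	fmt.Printf("memory: %d%%\n", requestedMemKi*100/allocatableMemKi)       // 13
}
```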
	
	
	Name:               ha-908878-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-908878-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82
	                    minikube.k8s.io/name=ha-908878
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_28T20_40_11_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 20:40:09 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-908878-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 20:42:53 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 28 May 2024 20:42:11 +0000   Tue, 28 May 2024 20:43:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 28 May 2024 20:42:11 +0000   Tue, 28 May 2024 20:43:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 28 May 2024 20:42:11 +0000   Tue, 28 May 2024 20:43:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 28 May 2024 20:42:11 +0000   Tue, 28 May 2024 20:43:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.239
	  Hostname:    ha-908878-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8f91cea3af174de9a05db650e4662bbb
	  System UUID:                8f91cea3-af17-4de9-a05d-b650e4662bbb
	  Boot ID:                    b2eef028-5a7a-487d-9126-300ce051c010
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-rfl74                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 etcd-ha-908878-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m9s
	  kube-system                 kindnet-6prxw                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m11s
	  kube-system                 kube-apiserver-ha-908878-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 kube-controller-manager-ha-908878-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-proxy-pg89k                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m11s
	  kube-system                 kube-scheduler-ha-908878-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-vip-ha-908878-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m7s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  6m11s (x8 over 6m11s)  kubelet          Node ha-908878-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m11s (x8 over 6m11s)  kubelet          Node ha-908878-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m11s (x7 over 6m11s)  kubelet          Node ha-908878-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m6s                   node-controller  Node ha-908878-m02 event: Registered Node ha-908878-m02 in Controller
	  Normal  RegisteredNode           5m53s                  node-controller  Node ha-908878-m02 event: Registered Node ha-908878-m02 in Controller
	  Normal  RegisteredNode           4m38s                  node-controller  Node ha-908878-m02 event: Registered Node ha-908878-m02 in Controller
	  Normal  NodeNotReady             2m46s                  node-controller  Node ha-908878-m02 status is now: NodeNotReady
	
	
	Name:               ha-908878-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-908878-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82
	                    minikube.k8s.io/name=ha-908878
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_28T20_41_28_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 20:41:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-908878-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 20:46:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 May 2024 20:41:55 +0000   Tue, 28 May 2024 20:41:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 May 2024 20:41:55 +0000   Tue, 28 May 2024 20:41:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 May 2024 20:41:55 +0000   Tue, 28 May 2024 20:41:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 May 2024 20:41:55 +0000   Tue, 28 May 2024 20:41:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.73
	  Hostname:    ha-908878-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2e3e9f9367694cccab6cb31074c7abc1
	  System UUID:                2e3e9f93-6769-4ccc-ab6c-b31074c7abc1
	  Boot ID:                    db2680cb-6e23-43c2-b2b5-a7f2a2d62f5c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-ldbfj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 etcd-ha-908878-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m54s
	  kube-system                 kindnet-fx2nj                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m56s
	  kube-system                 kube-apiserver-ha-908878-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-controller-manager-ha-908878-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-proxy-4vjp6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 kube-scheduler-ha-908878-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-vip-ha-908878-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m51s                  kube-proxy       
	  Normal  RegisteredNode           4m56s                  node-controller  Node ha-908878-m03 event: Registered Node ha-908878-m03 in Controller
	  Normal  NodeHasSufficientMemory  4m56s (x8 over 4m56s)  kubelet          Node ha-908878-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m56s (x8 over 4m56s)  kubelet          Node ha-908878-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m56s (x7 over 4m56s)  kubelet          Node ha-908878-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m56s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m53s                  node-controller  Node ha-908878-m03 event: Registered Node ha-908878-m03 in Controller
	  Normal  RegisteredNode           4m38s                  node-controller  Node ha-908878-m03 event: Registered Node ha-908878-m03 in Controller
	
	
	Name:               ha-908878-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-908878-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82
	                    minikube.k8s.io/name=ha-908878
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_28T20_42_26_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 20:42:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-908878-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 20:46:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 May 2024 20:42:56 +0000   Tue, 28 May 2024 20:42:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 May 2024 20:42:56 +0000   Tue, 28 May 2024 20:42:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 May 2024 20:42:56 +0000   Tue, 28 May 2024 20:42:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 May 2024 20:42:56 +0000   Tue, 28 May 2024 20:42:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.38
	  Hostname:    ha-908878-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f3c86941732a4e078803ce72d6cca1eb
	  System UUID:                f3c86941-732a-4e07-8803-ce72d6cca1eb
	  Boot ID:                    3305d0dc-4089-4a56-838a-9e99a8e74f80
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-68kxq       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m55s
	  kube-system                 kube-proxy-bnh2w    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m49s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m55s (x2 over 3m55s)  kubelet          Node ha-908878-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m55s (x2 over 3m55s)  kubelet          Node ha-908878-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m55s (x2 over 3m55s)  kubelet          Node ha-908878-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m53s                  node-controller  Node ha-908878-m04 event: Registered Node ha-908878-m04 in Controller
	  Normal  RegisteredNode           3m53s                  node-controller  Node ha-908878-m04 event: Registered Node ha-908878-m04 in Controller
	  Normal  RegisteredNode           3m51s                  node-controller  Node ha-908878-m04 event: Registered Node ha-908878-m04 in Controller
	  Normal  NodeReady                3m45s                  kubelet          Node ha-908878-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[May28 20:38] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050785] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040005] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.504289] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.190103] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.581513] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.578430] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.054216] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.052934] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.180850] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.119729] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.261744] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.070195] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +5.007183] systemd-fstab-generator[954]: Ignoring "noauto" option for root device
	[  +0.062643] kauditd_printk_skb: 158 callbacks suppressed
	[May28 20:39] systemd-fstab-generator[1373]: Ignoring "noauto" option for root device
	[  +0.085155] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.532403] kauditd_printk_skb: 18 callbacks suppressed
	[ +13.860818] kauditd_printk_skb: 38 callbacks suppressed
	[May28 20:40] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [650c6f374c3b3e6697c77d68d23551a1335fae362d137a464df72e2bc23e4e14] <==
	{"level":"warn","ts":"2024-05-28T20:46:20.700156Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T20:46:20.703099Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T20:46:20.711167Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T20:46:20.717622Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T20:46:20.73674Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T20:46:20.747016Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T20:46:20.757222Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T20:46:20.765225Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T20:46:20.76943Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T20:46:20.778053Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T20:46:20.78477Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T20:46:20.790753Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T20:46:20.794425Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T20:46:20.797534Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T20:46:20.802816Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T20:46:20.806481Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T20:46:20.815005Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T20:46:20.82455Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T20:46:20.828416Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T20:46:20.832029Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T20:46:20.838588Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T20:46:20.845363Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T20:46:20.85176Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T20:46:20.903672Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-28T20:46:20.920667Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 20:46:20 up 7 min,  0 users,  load average: 0.12, 0.23, 0.12
	Linux ha-908878 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [a7ea51bf984916bead29c664359f65ea790360f73bf15486bdf96c758189bd69] <==
	I0528 20:45:46.262099       1 main.go:250] Node ha-908878-m04 has CIDR [10.244.3.0/24] 
	I0528 20:45:56.270201       1 main.go:223] Handling node with IPs: map[192.168.39.100:{}]
	I0528 20:45:56.270255       1 main.go:227] handling current node
	I0528 20:45:56.270269       1 main.go:223] Handling node with IPs: map[192.168.39.239:{}]
	I0528 20:45:56.270274       1 main.go:250] Node ha-908878-m02 has CIDR [10.244.1.0/24] 
	I0528 20:45:56.270436       1 main.go:223] Handling node with IPs: map[192.168.39.73:{}]
	I0528 20:45:56.270458       1 main.go:250] Node ha-908878-m03 has CIDR [10.244.2.0/24] 
	I0528 20:45:56.270507       1 main.go:223] Handling node with IPs: map[192.168.39.38:{}]
	I0528 20:45:56.270539       1 main.go:250] Node ha-908878-m04 has CIDR [10.244.3.0/24] 
	I0528 20:46:06.287601       1 main.go:223] Handling node with IPs: map[192.168.39.100:{}]
	I0528 20:46:06.287694       1 main.go:227] handling current node
	I0528 20:46:06.287736       1 main.go:223] Handling node with IPs: map[192.168.39.239:{}]
	I0528 20:46:06.287744       1 main.go:250] Node ha-908878-m02 has CIDR [10.244.1.0/24] 
	I0528 20:46:06.288044       1 main.go:223] Handling node with IPs: map[192.168.39.73:{}]
	I0528 20:46:06.288209       1 main.go:250] Node ha-908878-m03 has CIDR [10.244.2.0/24] 
	I0528 20:46:06.288425       1 main.go:223] Handling node with IPs: map[192.168.39.38:{}]
	I0528 20:46:06.288465       1 main.go:250] Node ha-908878-m04 has CIDR [10.244.3.0/24] 
	I0528 20:46:16.306364       1 main.go:223] Handling node with IPs: map[192.168.39.100:{}]
	I0528 20:46:16.306406       1 main.go:227] handling current node
	I0528 20:46:16.306418       1 main.go:223] Handling node with IPs: map[192.168.39.239:{}]
	I0528 20:46:16.306423       1 main.go:250] Node ha-908878-m02 has CIDR [10.244.1.0/24] 
	I0528 20:46:16.306576       1 main.go:223] Handling node with IPs: map[192.168.39.73:{}]
	I0528 20:46:16.306621       1 main.go:250] Node ha-908878-m03 has CIDR [10.244.2.0/24] 
	I0528 20:46:16.306684       1 main.go:223] Handling node with IPs: map[192.168.39.38:{}]
	I0528 20:46:16.306706       1 main.go:250] Node ha-908878-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [f926e075722f1f9cbd1b9f6559baa2071c1f539f729ddcb3572a87f64a8934e9] <==
	I0528 20:39:07.236082       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0528 20:39:07.266129       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0528 20:39:07.285274       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0528 20:39:19.318316       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0528 20:39:20.069050       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0528 20:40:10.266455       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0528 20:40:10.266701       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0528 20:40:10.266746       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 8.839µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0528 20:40:10.268007       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0528 20:40:10.268121       1 timeout.go:142] post-timeout activity - time-elapsed: 1.756296ms, POST "/api/v1/namespaces/kube-system/events" result: <nil>
	E0528 20:41:54.413583       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55066: use of closed network connection
	E0528 20:41:54.619077       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55080: use of closed network connection
	E0528 20:41:54.803730       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55088: use of closed network connection
	E0528 20:41:55.011752       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55114: use of closed network connection
	E0528 20:41:55.211373       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55138: use of closed network connection
	E0528 20:41:55.402622       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55162: use of closed network connection
	E0528 20:41:55.576393       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55190: use of closed network connection
	E0528 20:41:55.790287       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55218: use of closed network connection
	E0528 20:41:55.969182       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55238: use of closed network connection
	E0528 20:41:56.277128       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55268: use of closed network connection
	E0528 20:41:56.445213       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55290: use of closed network connection
	E0528 20:41:56.630184       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55314: use of closed network connection
	E0528 20:41:56.817823       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55326: use of closed network connection
	E0528 20:41:56.990599       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55348: use of closed network connection
	E0528 20:41:57.171180       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55372: use of closed network connection
	
	
	==> kube-controller-manager [aece72d9b21aa208ac4ce8512a57af63132d5882a8675650a980fa6b531af247] <==
	I0528 20:41:24.352789       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-908878-m03"
	I0528 20:41:49.620415       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="78.896976ms"
	I0528 20:41:49.662578       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.071865ms"
	I0528 20:41:49.665186       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="2.545967ms"
	I0528 20:41:49.665448       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="111.789µs"
	I0528 20:41:49.788653       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="119.673974ms"
	I0528 20:41:49.959149       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="169.158521ms"
	I0528 20:41:50.024107       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="64.881844ms"
	I0528 20:41:50.068496       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.340615ms"
	I0528 20:41:50.068724       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.248µs"
	I0528 20:41:53.410659       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.275506ms"
	I0528 20:41:53.410947       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="100.061µs"
	I0528 20:41:53.838999       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.775079ms"
	I0528 20:41:53.839177       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.576µs"
	I0528 20:41:53.960484       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.291459ms"
	I0528 20:41:53.960696       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="89.482µs"
	E0528 20:42:25.215787       1 certificate_controller.go:146] Sync csr-wnzwn failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-wnzwn": the object has been modified; please apply your changes to the latest version and try again
	E0528 20:42:25.235546       1 certificate_controller.go:146] Sync csr-wnzwn failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-wnzwn": the object has been modified; please apply your changes to the latest version and try again
	I0528 20:42:25.529760       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-908878-m04\" does not exist"
	I0528 20:42:25.556939       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-908878-m04" podCIDRs=["10.244.3.0/24"]
	I0528 20:42:29.382657       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-908878-m04"
	I0528 20:42:35.936469       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-908878-m04"
	I0528 20:43:34.405211       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-908878-m04"
	I0528 20:43:34.458744       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.320257ms"
	I0528 20:43:34.459114       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.898µs"
	
	
	==> kube-proxy [97ba5f2725852aa19d200b1005c8bc9678a21a2b0a0b768cbf10f1dde692ecbe] <==
	I0528 20:39:21.545470       1 server_linux.go:69] "Using iptables proxy"
	I0528 20:39:21.569641       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.100"]
	I0528 20:39:21.631409       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0528 20:39:21.631495       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0528 20:39:21.631512       1 server_linux.go:165] "Using iptables Proxier"
	I0528 20:39:21.634617       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0528 20:39:21.635082       1 server.go:872] "Version info" version="v1.30.1"
	I0528 20:39:21.635116       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 20:39:21.636675       1 config.go:192] "Starting service config controller"
	I0528 20:39:21.636707       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0528 20:39:21.636737       1 config.go:101] "Starting endpoint slice config controller"
	I0528 20:39:21.636758       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0528 20:39:21.637418       1 config.go:319] "Starting node config controller"
	I0528 20:39:21.637446       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0528 20:39:21.737927       1 shared_informer.go:320] Caches are synced for node config
	I0528 20:39:21.737972       1 shared_informer.go:320] Caches are synced for service config
	I0528 20:39:21.738008       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [05d5882852e6eb482b9e3ba850d60958305fa84234fd741f3345967a57b1c1f9] <==
	W0528 20:39:05.084846       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0528 20:39:05.084955       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0528 20:39:05.100133       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0528 20:39:05.100220       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0528 20:39:05.182255       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0528 20:39:05.182542       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0528 20:39:05.219676       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0528 20:39:05.219802       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0528 20:39:05.336519       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0528 20:39:05.336613       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0528 20:39:05.349682       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0528 20:39:05.350132       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0528 20:39:05.355219       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0528 20:39:05.355300       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0528 20:39:05.750699       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0528 20:41:49.620623       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-ldbfj\": pod busybox-fc5497c4f-ldbfj is already assigned to node \"ha-908878-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-ldbfj" node="ha-908878-m03"
	E0528 20:41:49.621210       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 28610a08-d992-429e-8480-d957b325ccbd(default/busybox-fc5497c4f-ldbfj) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-ldbfj"
	E0528 20:41:49.621549       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-ldbfj\": pod busybox-fc5497c4f-ldbfj is already assigned to node \"ha-908878-m03\"" pod="default/busybox-fc5497c4f-ldbfj"
	I0528 20:41:49.621644       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-ldbfj" node="ha-908878-m03"
	E0528 20:41:49.620767       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-ljbzs\": pod busybox-fc5497c4f-ljbzs is already assigned to node \"ha-908878\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-ljbzs" node="ha-908878"
	E0528 20:41:49.628536       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 3a49d7b7-d8ae-44a8-8393-51781cf73591(default/busybox-fc5497c4f-ljbzs) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-ljbzs"
	E0528 20:41:49.628562       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-ljbzs\": pod busybox-fc5497c4f-ljbzs is already assigned to node \"ha-908878\"" pod="default/busybox-fc5497c4f-ljbzs"
	I0528 20:41:49.628589       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-ljbzs" node="ha-908878"
	E0528 20:42:25.645571       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-68kxq\": pod kindnet-68kxq is already assigned to node \"ha-908878-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-68kxq" node="ha-908878-m04"
	E0528 20:42:25.646180       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-68kxq\": pod kindnet-68kxq is already assigned to node \"ha-908878-m04\"" pod="kube-system/kindnet-68kxq"
	
	
	==> kubelet <==
	May 28 20:42:07 ha-908878 kubelet[1380]: E0528 20:42:07.192347    1380 iptables.go:577] "Could not set up iptables canary" err=<
	May 28 20:42:07 ha-908878 kubelet[1380]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 20:42:07 ha-908878 kubelet[1380]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 20:42:07 ha-908878 kubelet[1380]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 20:42:07 ha-908878 kubelet[1380]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 28 20:43:07 ha-908878 kubelet[1380]: E0528 20:43:07.191954    1380 iptables.go:577] "Could not set up iptables canary" err=<
	May 28 20:43:07 ha-908878 kubelet[1380]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 20:43:07 ha-908878 kubelet[1380]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 20:43:07 ha-908878 kubelet[1380]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 20:43:07 ha-908878 kubelet[1380]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 28 20:44:07 ha-908878 kubelet[1380]: E0528 20:44:07.196222    1380 iptables.go:577] "Could not set up iptables canary" err=<
	May 28 20:44:07 ha-908878 kubelet[1380]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 20:44:07 ha-908878 kubelet[1380]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 20:44:07 ha-908878 kubelet[1380]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 20:44:07 ha-908878 kubelet[1380]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 28 20:45:07 ha-908878 kubelet[1380]: E0528 20:45:07.189196    1380 iptables.go:577] "Could not set up iptables canary" err=<
	May 28 20:45:07 ha-908878 kubelet[1380]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 20:45:07 ha-908878 kubelet[1380]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 20:45:07 ha-908878 kubelet[1380]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 20:45:07 ha-908878 kubelet[1380]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 28 20:46:07 ha-908878 kubelet[1380]: E0528 20:46:07.193822    1380 iptables.go:577] "Could not set up iptables canary" err=<
	May 28 20:46:07 ha-908878 kubelet[1380]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 20:46:07 ha-908878 kubelet[1380]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 20:46:07 ha-908878 kubelet[1380]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 20:46:07 ha-908878 kubelet[1380]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-908878 -n ha-908878
helpers_test.go:261: (dbg) Run:  kubectl --context ha-908878 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (60.53s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (362.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-908878 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-908878 -v=7 --alsologtostderr
E0528 20:47:37.451222   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/functional-193928/client.crt: no such file or directory
E0528 20:48:05.135358   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/functional-193928/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-908878 -v=7 --alsologtostderr: exit status 82 (2m1.898709865s)

                                                
                                                
-- stdout --
	* Stopping node "ha-908878-m04"  ...
	* Stopping node "ha-908878-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0528 20:46:22.302971   28378 out.go:291] Setting OutFile to fd 1 ...
	I0528 20:46:22.303181   28378 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:46:22.303191   28378 out.go:304] Setting ErrFile to fd 2...
	I0528 20:46:22.303196   28378 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:46:22.303441   28378 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
	I0528 20:46:22.303659   28378 out.go:298] Setting JSON to false
	I0528 20:46:22.303746   28378 mustload.go:65] Loading cluster: ha-908878
	I0528 20:46:22.304102   28378 config.go:182] Loaded profile config "ha-908878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 20:46:22.304181   28378 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/config.json ...
	I0528 20:46:22.304350   28378 mustload.go:65] Loading cluster: ha-908878
	I0528 20:46:22.304471   28378 config.go:182] Loaded profile config "ha-908878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 20:46:22.304494   28378 stop.go:39] StopHost: ha-908878-m04
	I0528 20:46:22.304853   28378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:46:22.304893   28378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:46:22.320154   28378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41467
	I0528 20:46:22.320599   28378 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:46:22.321147   28378 main.go:141] libmachine: Using API Version  1
	I0528 20:46:22.321173   28378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:46:22.321529   28378 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:46:22.323982   28378 out.go:177] * Stopping node "ha-908878-m04"  ...
	I0528 20:46:22.325343   28378 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0528 20:46:22.325373   28378 main.go:141] libmachine: (ha-908878-m04) Calling .DriverName
	I0528 20:46:22.325595   28378 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0528 20:46:22.325626   28378 main.go:141] libmachine: (ha-908878-m04) Calling .GetSSHHostname
	I0528 20:46:22.328326   28378 main.go:141] libmachine: (ha-908878-m04) DBG | domain ha-908878-m04 has defined MAC address 52:54:00:dd:1f:24 in network mk-ha-908878
	I0528 20:46:22.328768   28378 main.go:141] libmachine: (ha-908878-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:1f:24", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:42:12 +0000 UTC Type:0 Mac:52:54:00:dd:1f:24 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-908878-m04 Clientid:01:52:54:00:dd:1f:24}
	I0528 20:46:22.328813   28378 main.go:141] libmachine: (ha-908878-m04) DBG | domain ha-908878-m04 has defined IP address 192.168.39.38 and MAC address 52:54:00:dd:1f:24 in network mk-ha-908878
	I0528 20:46:22.328973   28378 main.go:141] libmachine: (ha-908878-m04) Calling .GetSSHPort
	I0528 20:46:22.329159   28378 main.go:141] libmachine: (ha-908878-m04) Calling .GetSSHKeyPath
	I0528 20:46:22.329317   28378 main.go:141] libmachine: (ha-908878-m04) Calling .GetSSHUsername
	I0528 20:46:22.329457   28378 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m04/id_rsa Username:docker}
	I0528 20:46:22.416144   28378 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0528 20:46:22.469457   28378 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0528 20:46:22.523199   28378 main.go:141] libmachine: Stopping "ha-908878-m04"...
	I0528 20:46:22.523241   28378 main.go:141] libmachine: (ha-908878-m04) Calling .GetState
	I0528 20:46:22.524617   28378 main.go:141] libmachine: (ha-908878-m04) Calling .Stop
	I0528 20:46:22.527744   28378 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 0/120
	I0528 20:46:23.753963   28378 main.go:141] libmachine: (ha-908878-m04) Calling .GetState
	I0528 20:46:23.755398   28378 main.go:141] libmachine: Machine "ha-908878-m04" was stopped.
	I0528 20:46:23.755413   28378 stop.go:75] duration metric: took 1.430082348s to stop
	I0528 20:46:23.755434   28378 stop.go:39] StopHost: ha-908878-m03
	I0528 20:46:23.755724   28378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:46:23.755767   28378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:46:23.770044   28378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41561
	I0528 20:46:23.770396   28378 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:46:23.770836   28378 main.go:141] libmachine: Using API Version  1
	I0528 20:46:23.770858   28378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:46:23.771202   28378 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:46:23.773150   28378 out.go:177] * Stopping node "ha-908878-m03"  ...
	I0528 20:46:23.774400   28378 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0528 20:46:23.774422   28378 main.go:141] libmachine: (ha-908878-m03) Calling .DriverName
	I0528 20:46:23.774614   28378 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0528 20:46:23.774637   28378 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHHostname
	I0528 20:46:23.777272   28378 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:46:23.777719   28378 main.go:141] libmachine: (ha-908878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:3d:20", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:40:46 +0000 UTC Type:0 Mac:52:54:00:92:3d:20 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-908878-m03 Clientid:01:52:54:00:92:3d:20}
	I0528 20:46:23.777777   28378 main.go:141] libmachine: (ha-908878-m03) DBG | domain ha-908878-m03 has defined IP address 192.168.39.73 and MAC address 52:54:00:92:3d:20 in network mk-ha-908878
	I0528 20:46:23.777871   28378 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHPort
	I0528 20:46:23.778047   28378 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHKeyPath
	I0528 20:46:23.778201   28378 main.go:141] libmachine: (ha-908878-m03) Calling .GetSSHUsername
	I0528 20:46:23.778352   28378 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m03/id_rsa Username:docker}
	I0528 20:46:23.865000   28378 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0528 20:46:23.918065   28378 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0528 20:46:23.972776   28378 main.go:141] libmachine: Stopping "ha-908878-m03"...
	I0528 20:46:23.972804   28378 main.go:141] libmachine: (ha-908878-m03) Calling .GetState
	I0528 20:46:23.974410   28378 main.go:141] libmachine: (ha-908878-m03) Calling .Stop
	I0528 20:46:23.977603   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 0/120
	I0528 20:46:24.978964   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 1/120
	I0528 20:46:25.980322   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 2/120
	I0528 20:46:26.981712   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 3/120
	I0528 20:46:27.983407   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 4/120
	I0528 20:46:28.985368   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 5/120
	I0528 20:46:29.987246   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 6/120
	I0528 20:46:30.988807   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 7/120
	I0528 20:46:31.990349   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 8/120
	I0528 20:46:32.992012   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 9/120
	I0528 20:46:33.994296   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 10/120
	I0528 20:46:34.995908   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 11/120
	I0528 20:46:35.997314   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 12/120
	I0528 20:46:36.998763   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 13/120
	I0528 20:46:38.000307   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 14/120
	I0528 20:46:39.002264   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 15/120
	I0528 20:46:40.004033   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 16/120
	I0528 20:46:41.005464   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 17/120
	I0528 20:46:42.006990   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 18/120
	I0528 20:46:43.008393   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 19/120
	I0528 20:46:44.010080   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 20/120
	I0528 20:46:45.011500   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 21/120
	I0528 20:46:46.012921   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 22/120
	I0528 20:46:47.014408   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 23/120
	I0528 20:46:48.015785   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 24/120
	I0528 20:46:49.017297   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 25/120
	I0528 20:46:50.018520   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 26/120
	I0528 20:46:51.019856   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 27/120
	I0528 20:46:52.020958   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 28/120
	I0528 20:46:53.022538   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 29/120
	I0528 20:46:54.024623   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 30/120
	I0528 20:46:55.025817   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 31/120
	I0528 20:46:56.027439   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 32/120
	I0528 20:46:57.028770   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 33/120
	I0528 20:46:58.030021   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 34/120
	I0528 20:46:59.031834   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 35/120
	I0528 20:47:00.033072   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 36/120
	I0528 20:47:01.034413   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 37/120
	I0528 20:47:02.035604   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 38/120
	I0528 20:47:03.036756   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 39/120
	I0528 20:47:04.038004   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 40/120
	I0528 20:47:05.040231   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 41/120
	I0528 20:47:06.041555   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 42/120
	I0528 20:47:07.043068   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 43/120
	I0528 20:47:08.044317   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 44/120
	I0528 20:47:09.046035   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 45/120
	I0528 20:47:10.048081   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 46/120
	I0528 20:47:11.049163   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 47/120
	I0528 20:47:12.050472   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 48/120
	I0528 20:47:13.052069   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 49/120
	I0528 20:47:14.053835   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 50/120
	I0528 20:47:15.055179   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 51/120
	I0528 20:47:16.056463   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 52/120
	I0528 20:47:17.057840   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 53/120
	I0528 20:47:18.059094   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 54/120
	I0528 20:47:19.060356   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 55/120
	I0528 20:47:20.061637   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 56/120
	I0528 20:47:21.062961   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 57/120
	I0528 20:47:22.064279   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 58/120
	I0528 20:47:23.065599   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 59/120
	I0528 20:47:24.067425   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 60/120
	I0528 20:47:25.068944   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 61/120
	I0528 20:47:26.070159   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 62/120
	I0528 20:47:27.071547   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 63/120
	I0528 20:47:28.072907   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 64/120
	I0528 20:47:29.074576   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 65/120
	I0528 20:47:30.075840   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 66/120
	I0528 20:47:31.077120   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 67/120
	I0528 20:47:32.078638   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 68/120
	I0528 20:47:33.080018   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 69/120
	I0528 20:47:34.081460   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 70/120
	I0528 20:47:35.082776   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 71/120
	I0528 20:47:36.084292   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 72/120
	I0528 20:47:37.085594   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 73/120
	I0528 20:47:38.087004   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 74/120
	I0528 20:47:39.088705   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 75/120
	I0528 20:47:40.089964   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 76/120
	I0528 20:47:41.091288   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 77/120
	I0528 20:47:42.092550   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 78/120
	I0528 20:47:43.093942   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 79/120
	I0528 20:47:44.095557   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 80/120
	I0528 20:47:45.096813   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 81/120
	I0528 20:47:46.098129   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 82/120
	I0528 20:47:47.099403   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 83/120
	I0528 20:47:48.100761   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 84/120
	I0528 20:47:49.102105   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 85/120
	I0528 20:47:50.103324   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 86/120
	I0528 20:47:51.104717   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 87/120
	I0528 20:47:52.106862   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 88/120
	I0528 20:47:53.108341   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 89/120
	I0528 20:47:54.110062   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 90/120
	I0528 20:47:55.111409   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 91/120
	I0528 20:47:56.112782   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 92/120
	I0528 20:47:57.114198   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 93/120
	I0528 20:47:58.115619   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 94/120
	I0528 20:47:59.117398   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 95/120
	I0528 20:48:00.118725   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 96/120
	I0528 20:48:01.119934   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 97/120
	I0528 20:48:02.121921   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 98/120
	I0528 20:48:03.123216   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 99/120
	I0528 20:48:04.124701   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 100/120
	I0528 20:48:05.126079   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 101/120
	I0528 20:48:06.127283   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 102/120
	I0528 20:48:07.128474   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 103/120
	I0528 20:48:08.130721   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 104/120
	I0528 20:48:09.132844   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 105/120
	I0528 20:48:10.134230   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 106/120
	I0528 20:48:11.136293   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 107/120
	I0528 20:48:12.137648   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 108/120
	I0528 20:48:13.139318   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 109/120
	I0528 20:48:14.141384   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 110/120
	I0528 20:48:15.142644   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 111/120
	I0528 20:48:16.144009   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 112/120
	I0528 20:48:17.145457   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 113/120
	I0528 20:48:18.146877   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 114/120
	I0528 20:48:19.148523   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 115/120
	I0528 20:48:20.149696   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 116/120
	I0528 20:48:21.150935   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 117/120
	I0528 20:48:22.152214   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 118/120
	I0528 20:48:23.153521   28378 main.go:141] libmachine: (ha-908878-m03) Waiting for machine to stop 119/120
	I0528 20:48:24.154366   28378 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0528 20:48:24.154407   28378 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0528 20:48:24.156314   28378 out.go:177] 
	W0528 20:48:24.157588   28378 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0528 20:48:24.157603   28378 out.go:239] * 
	* 
	W0528 20:48:24.159827   28378 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0528 20:48:24.161156   28378 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-908878 -v=7 --alsologtostderr" : exit status 82
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-908878 --wait=true -v=7 --alsologtostderr
E0528 20:49:42.597913   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.crt: no such file or directory
E0528 20:51:05.644615   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-908878 --wait=true -v=7 --alsologtostderr: (3m57.58297766s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-908878
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-908878 -n ha-908878
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-908878 logs -n 25: (1.963961105s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-908878 cp ha-908878-m03:/home/docker/cp-test.txt                              | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878-m02:/home/docker/cp-test_ha-908878-m03_ha-908878-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-908878 ssh -n                                                                 | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-908878 ssh -n ha-908878-m02 sudo cat                                          | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | /home/docker/cp-test_ha-908878-m03_ha-908878-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-908878 cp ha-908878-m03:/home/docker/cp-test.txt                              | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878-m04:/home/docker/cp-test_ha-908878-m03_ha-908878-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-908878 ssh -n                                                                 | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-908878 ssh -n ha-908878-m04 sudo cat                                          | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | /home/docker/cp-test_ha-908878-m03_ha-908878-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-908878 cp testdata/cp-test.txt                                                | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-908878 ssh -n                                                                 | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-908878 cp ha-908878-m04:/home/docker/cp-test.txt                              | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3657915045/001/cp-test_ha-908878-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-908878 ssh -n                                                                 | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-908878 cp ha-908878-m04:/home/docker/cp-test.txt                              | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878:/home/docker/cp-test_ha-908878-m04_ha-908878.txt                       |           |         |         |                     |                     |
	| ssh     | ha-908878 ssh -n                                                                 | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-908878 ssh -n ha-908878 sudo cat                                              | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | /home/docker/cp-test_ha-908878-m04_ha-908878.txt                                 |           |         |         |                     |                     |
	| cp      | ha-908878 cp ha-908878-m04:/home/docker/cp-test.txt                              | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878-m02:/home/docker/cp-test_ha-908878-m04_ha-908878-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-908878 ssh -n                                                                 | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-908878 ssh -n ha-908878-m02 sudo cat                                          | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | /home/docker/cp-test_ha-908878-m04_ha-908878-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-908878 cp ha-908878-m04:/home/docker/cp-test.txt                              | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878-m03:/home/docker/cp-test_ha-908878-m04_ha-908878-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-908878 ssh -n                                                                 | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-908878 ssh -n ha-908878-m03 sudo cat                                          | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | /home/docker/cp-test_ha-908878-m04_ha-908878-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-908878 node stop m02 -v=7                                                     | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-908878 node start m02 -v=7                                                    | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:45 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-908878 -v=7                                                           | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:46 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-908878 -v=7                                                                | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:46 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-908878 --wait=true -v=7                                                    | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:48 UTC | 28 May 24 20:52 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-908878                                                                | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:52 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/28 20:48:24
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0528 20:48:24.204068   28866 out.go:291] Setting OutFile to fd 1 ...
	I0528 20:48:24.204182   28866 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:48:24.204192   28866 out.go:304] Setting ErrFile to fd 2...
	I0528 20:48:24.204197   28866 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:48:24.204371   28866 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
	I0528 20:48:24.204894   28866 out.go:298] Setting JSON to false
	I0528 20:48:24.205878   28866 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1847,"bootTime":1716927457,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0528 20:48:24.205930   28866 start.go:139] virtualization: kvm guest
	I0528 20:48:24.208220   28866 out.go:177] * [ha-908878] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0528 20:48:24.209581   28866 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 20:48:24.210757   28866 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 20:48:24.209558   28866 notify.go:220] Checking for updates...
	I0528 20:48:24.213081   28866 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 20:48:24.214389   28866 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 20:48:24.215613   28866 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0528 20:48:24.216762   28866 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 20:48:24.218304   28866 config.go:182] Loaded profile config "ha-908878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 20:48:24.218390   28866 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 20:48:24.218728   28866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:48:24.218768   28866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:48:24.235219   28866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37229
	I0528 20:48:24.235557   28866 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:48:24.236084   28866 main.go:141] libmachine: Using API Version  1
	I0528 20:48:24.236107   28866 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:48:24.236505   28866 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:48:24.236707   28866 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:48:24.270430   28866 out.go:177] * Using the kvm2 driver based on existing profile
	I0528 20:48:24.271599   28866 start.go:297] selected driver: kvm2
	I0528 20:48:24.271612   28866 start.go:901] validating driver "kvm2" against &{Name:ha-908878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.1 ClusterName:ha-908878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.239 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.73 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.38 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 20:48:24.271860   28866 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0528 20:48:24.272196   28866 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 20:48:24.272280   28866 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18966-3963/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0528 20:48:24.286440   28866 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0528 20:48:24.287048   28866 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 20:48:24.287101   28866 cni.go:84] Creating CNI manager for ""
	I0528 20:48:24.287112   28866 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0528 20:48:24.287165   28866 start.go:340] cluster config:
	{Name:ha-908878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-908878 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.239 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.73 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.38 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-till
er:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 20:48:24.287281   28866 iso.go:125] acquiring lock: {Name:mk0ef48bd19f4019c3ee87391d15b9afc1d5e8d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 20:48:24.288882   28866 out.go:177] * Starting "ha-908878" primary control-plane node in "ha-908878" cluster
	I0528 20:48:24.289973   28866 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 20:48:24.289997   28866 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0528 20:48:24.290018   28866 cache.go:56] Caching tarball of preloaded images
	I0528 20:48:24.290075   28866 preload.go:173] Found /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0528 20:48:24.290085   28866 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0528 20:48:24.290184   28866 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/config.json ...
	I0528 20:48:24.290372   28866 start.go:360] acquireMachinesLock for ha-908878: {Name:mk6fb3103b370c7f3d9923fd9de7afe2c77239d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0528 20:48:24.290409   28866 start.go:364] duration metric: took 19.609µs to acquireMachinesLock for "ha-908878"
	I0528 20:48:24.290422   28866 start.go:96] Skipping create...Using existing machine configuration
	I0528 20:48:24.290433   28866 fix.go:54] fixHost starting: 
	I0528 20:48:24.290683   28866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:48:24.290713   28866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:48:24.303677   28866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43033
	I0528 20:48:24.304146   28866 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:48:24.304580   28866 main.go:141] libmachine: Using API Version  1
	I0528 20:48:24.304601   28866 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:48:24.304847   28866 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:48:24.305032   28866 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:48:24.305185   28866 main.go:141] libmachine: (ha-908878) Calling .GetState
	I0528 20:48:24.306689   28866 fix.go:112] recreateIfNeeded on ha-908878: state=Running err=<nil>
	W0528 20:48:24.306720   28866 fix.go:138] unexpected machine state, will restart: <nil>
	I0528 20:48:24.308351   28866 out.go:177] * Updating the running kvm2 "ha-908878" VM ...
	I0528 20:48:24.309711   28866 machine.go:94] provisionDockerMachine start ...
	I0528 20:48:24.309726   28866 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:48:24.309912   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:48:24.312242   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:48:24.312751   28866 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:48:24.312796   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:48:24.312942   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:48:24.313118   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:48:24.313242   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:48:24.313410   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:48:24.313596   28866 main.go:141] libmachine: Using SSH client type: native
	I0528 20:48:24.313832   28866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0528 20:48:24.313847   28866 main.go:141] libmachine: About to run SSH command:
	hostname
	I0528 20:48:24.431595   28866 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-908878
	
	I0528 20:48:24.431622   28866 main.go:141] libmachine: (ha-908878) Calling .GetMachineName
	I0528 20:48:24.431843   28866 buildroot.go:166] provisioning hostname "ha-908878"
	I0528 20:48:24.431866   28866 main.go:141] libmachine: (ha-908878) Calling .GetMachineName
	I0528 20:48:24.432014   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:48:24.434759   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:48:24.435153   28866 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:48:24.435173   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:48:24.435360   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:48:24.435525   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:48:24.435671   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:48:24.435821   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:48:24.435981   28866 main.go:141] libmachine: Using SSH client type: native
	I0528 20:48:24.436195   28866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0528 20:48:24.436213   28866 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-908878 && echo "ha-908878" | sudo tee /etc/hostname
	I0528 20:48:24.570044   28866 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-908878
	
	I0528 20:48:24.570074   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:48:24.572713   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:48:24.573136   28866 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:48:24.573167   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:48:24.573302   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:48:24.573494   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:48:24.573632   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:48:24.573753   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:48:24.573935   28866 main.go:141] libmachine: Using SSH client type: native
	I0528 20:48:24.574096   28866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0528 20:48:24.574112   28866 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-908878' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-908878/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-908878' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 20:48:24.690302   28866 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 20:48:24.690357   28866 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18966-3963/.minikube CaCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18966-3963/.minikube}
	I0528 20:48:24.690406   28866 buildroot.go:174] setting up certificates
	I0528 20:48:24.690420   28866 provision.go:84] configureAuth start
	I0528 20:48:24.690437   28866 main.go:141] libmachine: (ha-908878) Calling .GetMachineName
	I0528 20:48:24.690679   28866 main.go:141] libmachine: (ha-908878) Calling .GetIP
	I0528 20:48:24.693174   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:48:24.693527   28866 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:48:24.693575   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:48:24.693628   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:48:24.695683   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:48:24.696050   28866 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:48:24.696075   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:48:24.696190   28866 provision.go:143] copyHostCerts
	I0528 20:48:24.696220   28866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem
	I0528 20:48:24.696258   28866 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem, removing ...
	I0528 20:48:24.696272   28866 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem
	I0528 20:48:24.696332   28866 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem (1123 bytes)
	I0528 20:48:24.696429   28866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem
	I0528 20:48:24.696457   28866 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem, removing ...
	I0528 20:48:24.696467   28866 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem
	I0528 20:48:24.696495   28866 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem (1675 bytes)
	I0528 20:48:24.696547   28866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem
	I0528 20:48:24.696563   28866 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem, removing ...
	I0528 20:48:24.696569   28866 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem
	I0528 20:48:24.696589   28866 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem (1078 bytes)
	I0528 20:48:24.696647   28866 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem org=jenkins.ha-908878 san=[127.0.0.1 192.168.39.100 ha-908878 localhost minikube]
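
For context on the server-cert generation step above, here is a minimal Go sketch of issuing a server certificate that carries the same kind of IP and DNS SANs, signed by an existing CA key pair. File names, key size, and validity period are illustrative assumptions, not minikube's actual code path.

// sancert.go: issue a server certificate with IP and DNS SANs from an
// existing CA key pair. Paths, key size, and validity are placeholders.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	// Load the CA certificate and its RSA key (assumed PEM / PKCS#1 encoded).
	caBlock, _ := pem.Decode(must(os.ReadFile("ca.pem")))
	keyBlock, _ := pem.Decode(must(os.ReadFile("ca-key.pem")))
	if caBlock == nil || keyBlock == nil {
		panic("CA certificate or key is not valid PEM")
	}
	caCert := must(x509.ParseCertificate(caBlock.Bytes))
	caKey := must(x509.ParsePKCS1PrivateKey(keyBlock.Bytes))

	// Fresh server key plus a template carrying SANs like those in the log line above.
	serverKey := must(rsa.GenerateKey(rand.Reader, 2048))
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-908878"}},
		DNSNames:     []string{"ha-908878", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.100")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der := must(x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey))
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
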
	I0528 20:48:25.053830   28866 provision.go:177] copyRemoteCerts
	I0528 20:48:25.053893   28866 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 20:48:25.053914   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:48:25.056270   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:48:25.056647   28866 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:48:25.056676   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:48:25.056851   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:48:25.057052   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:48:25.057219   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:48:25.057370   28866 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/id_rsa Username:docker}
	I0528 20:48:25.140614   28866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0528 20:48:25.140675   28866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0528 20:48:25.168749   28866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0528 20:48:25.168820   28866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0528 20:48:25.198869   28866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0528 20:48:25.198914   28866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0528 20:48:25.222989   28866 provision.go:87] duration metric: took 532.546897ms to configureAuth
	I0528 20:48:25.223010   28866 buildroot.go:189] setting minikube options for container-runtime
	I0528 20:48:25.223203   28866 config.go:182] Loaded profile config "ha-908878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 20:48:25.223281   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:48:25.225960   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:48:25.226327   28866 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:48:25.226356   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:48:25.226494   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:48:25.226678   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:48:25.226802   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:48:25.226904   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:48:25.227090   28866 main.go:141] libmachine: Using SSH client type: native
	I0528 20:48:25.227237   28866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0528 20:48:25.227252   28866 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0528 20:49:56.091138   28866 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0528 20:49:56.091170   28866 machine.go:97] duration metric: took 1m31.781448103s to provisionDockerMachine
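
The provisioning phase above runs its shell commands over SSH. The sketch below shows one way to do that with golang.org/x/crypto/ssh; the host, user, key path, and command are placeholders, and the host-key check is relaxed only because this would target a throwaway test VM.

// runssh.go: run a single command on a remote machine over SSH, roughly
// the pattern used by the provisioning steps above. All values are placeholders.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile(os.ExpandEnv("$HOME/.ssh/id_rsa"))
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a disposable test VM only
	}
	client, err := ssh.Dial("tcp", "192.168.39.100:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	out, err := sess.CombinedOutput("hostname")
	fmt.Printf("output: %s err: %v\n", out, err)
}
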
	I0528 20:49:56.091182   28866 start.go:293] postStartSetup for "ha-908878" (driver="kvm2")
	I0528 20:49:56.091191   28866 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0528 20:49:56.091204   28866 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:49:56.091547   28866 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0528 20:49:56.091572   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:49:56.094605   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:49:56.095049   28866 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:49:56.095071   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:49:56.095230   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:49:56.095444   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:49:56.095608   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:49:56.095707   28866 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/id_rsa Username:docker}
	I0528 20:49:56.181650   28866 ssh_runner.go:195] Run: cat /etc/os-release
	I0528 20:49:56.186256   28866 info.go:137] Remote host: Buildroot 2023.02.9
	I0528 20:49:56.186286   28866 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/addons for local assets ...
	I0528 20:49:56.186355   28866 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/files for local assets ...
	I0528 20:49:56.186465   28866 filesync.go:149] local asset: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem -> 117602.pem in /etc/ssl/certs
	I0528 20:49:56.186479   28866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem -> /etc/ssl/certs/117602.pem
	I0528 20:49:56.186665   28866 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0528 20:49:56.195827   28866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem --> /etc/ssl/certs/117602.pem (1708 bytes)
	I0528 20:49:56.220860   28866 start.go:296] duration metric: took 129.669198ms for postStartSetup
	I0528 20:49:56.220890   28866 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:49:56.221158   28866 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0528 20:49:56.221181   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:49:56.224021   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:49:56.224400   28866 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:49:56.224444   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:49:56.224596   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:49:56.224785   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:49:56.224949   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:49:56.225145   28866 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/id_rsa Username:docker}
	W0528 20:49:56.307945   28866 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0528 20:49:56.307970   28866 fix.go:56] duration metric: took 1m32.017539741s for fixHost
	I0528 20:49:56.307994   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:49:56.310432   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:49:56.310825   28866 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:49:56.310852   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:49:56.310976   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:49:56.311163   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:49:56.311349   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:49:56.311508   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:49:56.311670   28866 main.go:141] libmachine: Using SSH client type: native
	I0528 20:49:56.311831   28866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0528 20:49:56.311842   28866 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0528 20:49:56.422632   28866 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716929396.383641572
	
	I0528 20:49:56.422656   28866 fix.go:216] guest clock: 1716929396.383641572
	I0528 20:49:56.422666   28866 fix.go:229] Guest: 2024-05-28 20:49:56.383641572 +0000 UTC Remote: 2024-05-28 20:49:56.30797848 +0000 UTC m=+92.137253979 (delta=75.663092ms)
	I0528 20:49:56.422699   28866 fix.go:200] guest clock delta is within tolerance: 75.663092ms
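
A small Go sketch of the clock-delta comparison above: take the guest's date +%s.%N reading, subtract it from the host clock, and compare against a tolerance. The 2-second threshold here is an assumed example, not minikube's actual limit.

// clockdelta.go: compare a guest timestamp against the host clock and
// report whether the delta is within tolerance.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Guest time as reported by `date +%s.%N` on the VM (seconds, nanoseconds).
	guest := time.Unix(1716929396, 383641572)
	host := time.Now()

	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // illustrative threshold
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= tolerance)
}
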
	I0528 20:49:56.422706   28866 start.go:83] releasing machines lock for "ha-908878", held for 1m32.132288075s
	I0528 20:49:56.422726   28866 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:49:56.422958   28866 main.go:141] libmachine: (ha-908878) Calling .GetIP
	I0528 20:49:56.425582   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:49:56.425998   28866 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:49:56.426023   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:49:56.426192   28866 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:49:56.426646   28866 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:49:56.426809   28866 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:49:56.426880   28866 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0528 20:49:56.426919   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:49:56.426985   28866 ssh_runner.go:195] Run: cat /version.json
	I0528 20:49:56.427002   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:49:56.429527   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:49:56.429864   28866 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:49:56.429891   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:49:56.429910   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:49:56.430039   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:49:56.430212   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:49:56.430336   28866 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:49:56.430343   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:49:56.430373   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:49:56.430548   28866 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/id_rsa Username:docker}
	I0528 20:49:56.430612   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:49:56.430723   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:49:56.430860   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:49:56.430996   28866 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/id_rsa Username:docker}
	I0528 20:49:56.538071   28866 ssh_runner.go:195] Run: systemctl --version
	I0528 20:49:56.559683   28866 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0528 20:49:56.764634   28866 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0528 20:49:56.771238   28866 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0528 20:49:56.771287   28866 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 20:49:56.780671   28866 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0528 20:49:56.780688   28866 start.go:494] detecting cgroup driver to use...
	I0528 20:49:56.780750   28866 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0528 20:49:56.797765   28866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 20:49:56.810485   28866 docker.go:217] disabling cri-docker service (if available) ...
	I0528 20:49:56.810535   28866 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0528 20:49:56.823744   28866 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0528 20:49:56.836390   28866 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0528 20:49:56.993900   28866 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0528 20:49:57.146929   28866 docker.go:233] disabling docker service ...
	I0528 20:49:57.146999   28866 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0528 20:49:57.164741   28866 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0528 20:49:57.178852   28866 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0528 20:49:57.333890   28866 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0528 20:49:57.478649   28866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0528 20:49:57.492514   28866 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 20:49:57.511447   28866 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0528 20:49:57.511516   28866 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:49:57.521824   28866 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0528 20:49:57.521888   28866 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:49:57.532010   28866 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:49:57.542076   28866 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:49:57.552040   28866 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0528 20:49:57.565791   28866 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:49:57.575541   28866 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:49:57.588966   28866 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:49:57.598707   28866 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0528 20:49:57.607571   28866 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0528 20:49:57.616222   28866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 20:49:57.758765   28866 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0528 20:49:58.061074   28866 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0528 20:49:58.061145   28866 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0528 20:49:58.066417   28866 start.go:562] Will wait 60s for crictl version
	I0528 20:49:58.066464   28866 ssh_runner.go:195] Run: which crictl
	I0528 20:49:58.070375   28866 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0528 20:49:58.111248   28866 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
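
The 60-second wait for the CRI-O socket above can be reproduced with a simple polling loop. The sketch below reuses the path and timeout from the log; the 500ms polling interval is an assumption.

// waitsock.go: poll for a path until it exists or a deadline passes,
// similar in spirit to the socket wait above.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("socket is present")
}
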
	I0528 20:49:58.111323   28866 ssh_runner.go:195] Run: crio --version
	I0528 20:49:58.141803   28866 ssh_runner.go:195] Run: crio --version
	I0528 20:49:58.178059   28866 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0528 20:49:58.179295   28866 main.go:141] libmachine: (ha-908878) Calling .GetIP
	I0528 20:49:58.181831   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:49:58.182159   28866 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:49:58.182205   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:49:58.182381   28866 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0528 20:49:58.187120   28866 kubeadm.go:877] updating cluster {Name:ha-908878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Cl
usterName:ha-908878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.239 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.73 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.38 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0528 20:49:58.187247   28866 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 20:49:58.187283   28866 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 20:49:58.232156   28866 crio.go:514] all images are preloaded for cri-o runtime.
	I0528 20:49:58.232175   28866 crio.go:433] Images already preloaded, skipping extraction
	I0528 20:49:58.232230   28866 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 20:49:58.264195   28866 crio.go:514] all images are preloaded for cri-o runtime.
	I0528 20:49:58.264215   28866 cache_images.go:84] Images are preloaded, skipping loading
	I0528 20:49:58.264222   28866 kubeadm.go:928] updating node { 192.168.39.100 8443 v1.30.1 crio true true} ...
	I0528 20:49:58.264333   28866 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-908878 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-908878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0528 20:49:58.264396   28866 ssh_runner.go:195] Run: crio config
	I0528 20:49:58.308515   28866 cni.go:84] Creating CNI manager for ""
	I0528 20:49:58.308537   28866 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0528 20:49:58.308557   28866 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0528 20:49:58.308586   28866 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.100 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-908878 NodeName:ha-908878 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0528 20:49:58.308707   28866 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.100
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-908878"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.100
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.100"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0528 20:49:58.308725   28866 kube-vip.go:115] generating kube-vip config ...
	I0528 20:49:58.308760   28866 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0528 20:49:58.320326   28866 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0528 20:49:58.320437   28866 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0528 20:49:58.320498   28866 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0528 20:49:58.329728   28866 binaries.go:44] Found k8s binaries, skipping transfer
	I0528 20:49:58.329798   28866 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0528 20:49:58.338940   28866 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0528 20:49:58.355343   28866 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0528 20:49:58.370740   28866 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0528 20:49:58.386731   28866 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0528 20:49:58.404742   28866 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0528 20:49:58.409148   28866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 20:49:58.554304   28866 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 20:49:58.575430   28866 certs.go:68] Setting up /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878 for IP: 192.168.39.100
	I0528 20:49:58.575448   28866 certs.go:194] generating shared ca certs ...
	I0528 20:49:58.575469   28866 certs.go:226] acquiring lock for ca certs: {Name:mkf081a2e878fd451cfbdf01dc28a5f7a6a3abc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:49:58.575612   28866 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key
	I0528 20:49:58.575651   28866 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key
	I0528 20:49:58.575660   28866 certs.go:256] generating profile certs ...
	I0528 20:49:58.575727   28866 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/client.key
	I0528 20:49:58.575755   28866 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key.f57a1a49
	I0528 20:49:58.575767   28866 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt.f57a1a49 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.100 192.168.39.239 192.168.39.73 192.168.39.254]
	I0528 20:49:58.804038   28866 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt.f57a1a49 ...
	I0528 20:49:58.804073   28866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt.f57a1a49: {Name:mk40040315213a61d76b8a4de8750cbacbede3cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:49:58.804238   28866 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key.f57a1a49 ...
	I0528 20:49:58.804253   28866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key.f57a1a49: {Name:mkb53dfe536b42922018e47461d6b9031ae3259c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:49:58.804314   28866 certs.go:381] copying /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt.f57a1a49 -> /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt
	I0528 20:49:58.804494   28866 certs.go:385] copying /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key.f57a1a49 -> /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key
	I0528 20:49:58.804621   28866 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/proxy-client.key
	I0528 20:49:58.804637   28866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0528 20:49:58.804649   28866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0528 20:49:58.804662   28866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0528 20:49:58.804677   28866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0528 20:49:58.804691   28866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0528 20:49:58.804704   28866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0528 20:49:58.804720   28866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0528 20:49:58.804732   28866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0528 20:49:58.804779   28866 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem (1338 bytes)
	W0528 20:49:58.804805   28866 certs.go:480] ignoring /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760_empty.pem, impossibly tiny 0 bytes
	I0528 20:49:58.804814   28866 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem (1675 bytes)
	I0528 20:49:58.804833   28866 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem (1078 bytes)
	I0528 20:49:58.804865   28866 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem (1123 bytes)
	I0528 20:49:58.804889   28866 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem (1675 bytes)
	I0528 20:49:58.804925   28866 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem (1708 bytes)
	I0528 20:49:58.804962   28866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0528 20:49:58.804976   28866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem -> /usr/share/ca-certificates/11760.pem
	I0528 20:49:58.804989   28866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem -> /usr/share/ca-certificates/117602.pem
	I0528 20:49:58.805524   28866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0528 20:49:58.830404   28866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0528 20:49:58.853329   28866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0528 20:49:58.876952   28866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0528 20:49:58.899682   28866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0528 20:49:58.921851   28866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0528 20:49:58.944472   28866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0528 20:49:58.967558   28866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0528 20:49:58.989598   28866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0528 20:49:59.012181   28866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem --> /usr/share/ca-certificates/11760.pem (1338 bytes)
	I0528 20:49:59.034280   28866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem --> /usr/share/ca-certificates/117602.pem (1708 bytes)
	I0528 20:49:59.056612   28866 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0528 20:49:59.072298   28866 ssh_runner.go:195] Run: openssl version
	I0528 20:49:59.077801   28866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11760.pem && ln -fs /usr/share/ca-certificates/11760.pem /etc/ssl/certs/11760.pem"
	I0528 20:49:59.088173   28866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11760.pem
	I0528 20:49:59.092364   28866 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 28 20:34 /usr/share/ca-certificates/11760.pem
	I0528 20:49:59.092404   28866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11760.pem
	I0528 20:49:59.097732   28866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11760.pem /etc/ssl/certs/51391683.0"
	I0528 20:49:59.106953   28866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/117602.pem && ln -fs /usr/share/ca-certificates/117602.pem /etc/ssl/certs/117602.pem"
	I0528 20:49:59.117339   28866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117602.pem
	I0528 20:49:59.121543   28866 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 28 20:34 /usr/share/ca-certificates/117602.pem
	I0528 20:49:59.121588   28866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117602.pem
	I0528 20:49:59.126958   28866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/117602.pem /etc/ssl/certs/3ec20f2e.0"
	I0528 20:49:59.136435   28866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0528 20:49:59.147918   28866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0528 20:49:59.152210   28866 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 28 20:22 /usr/share/ca-certificates/minikubeCA.pem
	I0528 20:49:59.152245   28866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0528 20:49:59.157937   28866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
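The block above installs each CA into /etc/ssl/certs under its OpenSSL subject-hash name (for example b5213941.0 for minikubeCA.pem), which is how OpenSSL's certificate lookup finds trusted roots. A minimal sketch of that pattern in Go, shelling out to openssl for the hash exactly as the log does; the certificate path is a placeholder and the program needs write access to /etc/ssl/certs:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem" // placeholder path

	// Ask openssl for the subject hash, as in the log: openssl x509 -hash -noout -in <pem>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		log.Fatalf("openssl x509 -hash: %v", err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"

	// Link the certificate under <hash>.0 so OpenSSL can resolve it by subject hash.
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		if err := os.Symlink(pemPath, link); err != nil {
			log.Fatalf("symlink: %v", err)
		}
	}
	fmt.Println("installed", link)
}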
	I0528 20:49:59.167758   28866 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0528 20:49:59.172177   28866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0528 20:49:59.177508   28866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0528 20:49:59.182920   28866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0528 20:49:59.188238   28866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0528 20:49:59.193393   28866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0528 20:49:59.198779   28866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
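Each `openssl x509 -checkend 86400` call above asks whether the certificate expires within the next 24 hours (86400 seconds). A pure-Go equivalent using crypto/x509, shown here as a sketch with a placeholder certificate path:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	path := "/var/lib/minikube/certs/apiserver-kubelet-client.crt" // placeholder path

	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatalf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}

	// Equivalent of `-checkend 86400`: does the certificate outlive the next 24h?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Printf("%s expires within 24h (NotAfter=%s)\n", path, cert.NotAfter)
		os.Exit(1)
	}
	fmt.Printf("%s is valid beyond 24h (NotAfter=%s)\n", path, cert.NotAfter)
}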
	I0528 20:49:59.204157   28866 kubeadm.go:391] StartCluster: {Name:ha-908878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clust
erName:ha-908878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.239 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.73 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.38 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
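The long "StartCluster: {...}" entry above is the cluster configuration printed with Go's %+v verb. A minimal sketch of how such a line is produced, using a hypothetical trimmed-down struct (not minikube's real config type) populated with values taken from the log itself:

package main

import "log"

type node struct {
	Name         string
	IP           string
	Port         int
	ControlPlane bool
	Worker       bool
}

type clusterConfig struct {
	Name              string
	Driver            string
	Memory            int
	CPUs              int
	KubernetesVersion string
	ContainerRuntime  string
	Nodes             []node
}

func main() {
	cfg := clusterConfig{
		Name:              "ha-908878",
		Driver:            "kvm2",
		Memory:            2200,
		CPUs:              2,
		KubernetesVersion: "v1.30.1",
		ContainerRuntime:  "crio",
		Nodes: []node{
			{Name: "", IP: "192.168.39.100", Port: 8443, ControlPlane: true, Worker: true},
			{Name: "m02", IP: "192.168.39.239", Port: 8443, ControlPlane: true, Worker: true},
		},
	}
	// %+v prints field names alongside values, giving the bracketed dump seen in the log.
	log.Printf("StartCluster: %+v", cfg)
}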
	I0528 20:49:59.206385   28866 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0528 20:49:59.206445   28866 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0528 20:49:59.244213   28866 cri.go:89] found id: "5af8a1a44bf9b9407851d48a083b47557b4c872cdfd4995bcbad87344ac95a9c"
	I0528 20:49:59.244239   28866 cri.go:89] found id: "cb43a6411985dc31db5a9076b261726f846c9e3a2a6b14211128785dfa10a0d0"
	I0528 20:49:59.244243   28866 cri.go:89] found id: "f949602c90086db46304946ba677992a2ad4ee9ff44cc88b1780dd33f3a90fba"
	I0528 20:49:59.244247   28866 cri.go:89] found id: "8e652d16bcddb4efaa826971f662ae9d9b0c10496a7ad32cdc523787a676111c"
	I0528 20:49:59.244250   28866 cri.go:89] found id: "7c38e07fa546e3574439db0d8c463f8598be9b5ee1b022328a10e812abe191a6"
	I0528 20:49:59.244253   28866 cri.go:89] found id: "0b6fe231fc7dbce25d01bf248b4e363bd410d67b785874c03d67b9895b05cd8d"
	I0528 20:49:59.244255   28866 cri.go:89] found id: "2470320e3bec5ccf0d58f255e4bc440917dfb3fae12b2a48365113a51d7dd6e9"
	I0528 20:49:59.244258   28866 cri.go:89] found id: "a7ea51bf984916bead29c664359f65ea790360f73bf15486bdf96c758189bd69"
	I0528 20:49:59.244260   28866 cri.go:89] found id: "97ba5f2725852aa19d200b1005c8bc9678a21a2b0a0b768cbf10f1dde692ecbe"
	I0528 20:49:59.244265   28866 cri.go:89] found id: "20cf414ed60510910dc01761f0afc869180c224d4e26872a468b10e603e8e786"
	I0528 20:49:59.244268   28866 cri.go:89] found id: "05d5882852e6eb482b9e3ba850d60958305fa84234fd741f3345967a57b1c1f9"
	I0528 20:49:59.244271   28866 cri.go:89] found id: "aece72d9b21aa208ac4ce8512a57af63132d5882a8675650a980fa6b531af247"
	I0528 20:49:59.244273   28866 cri.go:89] found id: "650c6f374c3b3e6697c77d68d23551a1335fae362d137a464df72e2bc23e4e14"
	I0528 20:49:59.244275   28866 cri.go:89] found id: "f926e075722f1f9cbd1b9f6559baa2071c1f539f729ddcb3572a87f64a8934e9"
	I0528 20:49:59.244281   28866 cri.go:89] found id: ""
	I0528 20:49:59.244319   28866 ssh_runner.go:195] Run: sudo runc list -f json
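The "found id:" lines above come from running crictl with a namespace label filter and collecting the non-empty container IDs it prints. A minimal sketch of that step, shelling out to crictl with the same flags as the log; it assumes crictl is on PATH and that sudo is available:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same invocation as the log: all containers in kube-system, IDs only.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		log.Fatalf("crictl ps: %v", err)
	}

	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}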
	
	
	==> CRI-O <==
	May 28 20:52:22 ha-908878 crio[3860]: time="2024-05-28 20:52:22.520675839Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9ca8cf35-8bd0-4406-9c61-5d5e7b465f42 name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:52:22 ha-908878 crio[3860]: time="2024-05-28 20:52:22.521277771Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:23d89b9262db69731a4648e821b6ee02ddbcd64e953f442b0b1c790ad99e06bb,PodSandboxId:a10360900a6684a37e5da1e4b53126389f65ecdb98ad83f88412addfdf85e402,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716929492179728308,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79872e2-b267-446a-99dc-5bf9f398d31c,},Annotations:map[string]string{io.kubernetes.container.hash: 1d6aac92,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba9034a620e1e980ba83271c2686d0e2cc1672f83aa4b19c3789a2bcda09a040,PodSandboxId:cf4ac2dd6e8f4c464322a6fed240ad979ba60c14ae71f12bcd94430d7a8d2028,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1716929468185478357,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4mzh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8069a7ea-0ab1-4064-b982-867dbdfd97aa,},Annotations:map[string]string{io.kubernetes.container.hash: 730aff3c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4de963c4394e867a25c17c67c272c43fcf46165892ff733312639139f72f5cc8,PodSandboxId:a10360900a6684a37e5da1e4b53126389f65ecdb98ad83f88412addfdf85e402,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716929446183340156,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79872e2-b267-446a-99dc-5bf9f398d31c,},Annotations:map[string]string{io.kubernetes.container.hash: 1d6aac92,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5457554337f0db189ea45788d726f4b927d69bfc466967041382543f02be8b80,PodSandboxId:a140a8d8883554f2f866690c6b175764121c70dbd919a9954e74a7003deaccca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716929446177076854,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 386e57d3e5870130913571793cc3ee94,},Annotations:map[string]string{io.kubernetes.container.hash: f40b1b90,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f796b4c1fcb382b678803794da8228990266f95adb1e99f62cb07eb6a2dc2b0e,PodSandboxId:fc72b589827f1a220bf5901fb646f2f7056f1d2ebd15499fbaf01fc162c34ed1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716929443175818326,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c574865fe7260be39c1b7f67609414,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:805069c4ea3f52b056a6db0b2cbc48c32b9aec82e2eced5975d131a3d6813894,PodSandboxId:4cffb5b7c6c9c681ffed44e3b79a1b4db97beb4e3bc56f7c0bebbf9be6e48c4d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716929433445060154,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ljbzs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a49d7b7-d8ae-44a8-8393-51781cf73591,},Annotations:map[string]string{io.kubernetes.container.hash: 1fd948c0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41c6e92ce9a6765f9775e85692043edbc3cacb6d1eb3f9c07f81dc5fc71305a5,PodSandboxId:144ffb432d19700f4db1dbee070861a5bafb881fb8aaf7bd4c4b4a06bebe57fb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716929411246533502,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e83b4276fb38a7bed5e82c53c2dba82,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:278ab03af8f2313b514521259d04b44733ed754d0d537ac169ab50c94bc2944c,PodSandboxId:ea611d2d609918a13a673876b8f432aa70108f7177ebae9950b9f6eccbdc2ab9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716929400629960416,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ng8mq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0b1264-09c7-44b2-ba8c-e145e825fdbe,},Annotations:map[string]string{io.kubernetes.container.hash: 3bac967e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbe5284b
67f85e03d1cd00b308ec28baa2dd3f031d560ef8bcee3e352afbd4f0,PodSandboxId:cf4ac2dd6e8f4c464322a6fed240ad979ba60c14ae71f12bcd94430d7a8d2028,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1716929400347691445,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4mzh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8069a7ea-0ab1-4064-b982-867dbdfd97aa,},Annotations:map[string]string{io.kubernetes.container.hash: 730aff3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4514d4354f5473ce91d574749050b5534564282802b
dbac026aa2ea297033f90,PodSandboxId:3a8f3b7df90322d86bb148e5f38eae2fe33ca7873be9f745c0c6db25143dc42a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716929400480532978,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mvx67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b51beb7-0397-4008-b878-97edd41c6b94,},Annotations:map[string]string{io.kubernetes.container.hash: 10935396,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d3a1aa224cb0622f2a2e529a50b304df8ff9e2738e1dc6698099e3a06dba136,PodSandboxId:c7cb03481617d80cb9f9dcef56558b44a28163ee1857e6c9900a3ff7ef9db308,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716929400208165709,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35af7b91312437941e451d7e638db460,},Annotations:map[string]string{io.kubernetes.container.hash: cd1ed920,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c540689ad07f7ee30a68b6951597a2a7519d5077f0d3603cb0a035ebaab6dc15,PodSandboxId:d59153638107de764c3747df65809d3cdc474479ba91b817e5d7b3c598f84cb1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716929400194256948,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5fmns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a3bda1-29ba-4982-baf5-0adc97b4eb45,},Annotations:map[string]string{io.kubernetes.container.hash: dc173f12,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"}
,{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:512ee36cfc30be3bbea5a2852da22e2052a8bc2f3c461e7e0f1c5bdab2356ceb,PodSandboxId:fc72b589827f1a220bf5901fb646f2f7056f1d2ebd15499fbaf01fc162c34ed1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716929400105679968,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c574865fe7260be39c1b7f676094
14,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7611fb5205e4379847769c096d7bee8dd1dcbf6921b3cd3e9e213bd651f0650a,PodSandboxId:bfa5f863df0d89a8dc8be0920e4334f7380837022415720c4d0b630df3fc2adf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716929400018305553,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f1c036da804e98d08f1608248bc0a8f,},Annotations:map[
string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eea72764c6cef58a30efce77aa95fcea5c0414983c225f7e96112a5a8c65c5a,PodSandboxId:a140a8d8883554f2f866690c6b175764121c70dbd919a9954e74a7003deaccca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716929399958628122,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 386e57d3e5870130913571793cc3ee94,},Annotations:map[string]string{io.kuber
netes.container.hash: f40b1b90,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92c83dd481e567399f2a52a7fbc48eccc99c0cb418fa571fd10df44f361a9f39,PodSandboxId:dfbac4c22bc27b76f3af221ef78b5eb13bffba587238a75f6978ec32db0d2669,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716928912917661950,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ljbzs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a49d7b7-d8ae-44a8-8393-51781cf73591,},Annotations:map[string]string{io.kuberne
tes.container.hash: 1fd948c0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c38e07fa546e3574439db0d8c463f8598be9b5ee1b022328a10e812abe191a6,PodSandboxId:fb8a83ba500b4c9c50bcef1be750bfc7e5a7db94160b054581efcafa03519147,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716928766590753062,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mvx67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b51beb7-0397-4008-b878-97edd41c6b94,},Annotations:map[string]string{io.kubernetes.container.hash: 10935396,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2470320e3bec5ccf0d58f255e4bc440917dfb3fae12b2a48365113a51d7dd6e9,PodSandboxId:5333c6894c4460cb033cd36d59ef316b9716e099ae9997c69bc25e869c87b03d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716928766572153338,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-5fmns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a3bda1-29ba-4982-baf5-0adc97b4eb45,},Annotations:map[string]string{io.kubernetes.container.hash: dc173f12,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97ba5f2725852aa19d200b1005c8bc9678a21a2b0a0b768cbf10f1dde692ecbe,PodSandboxId:2a5f076d2569c649f3e0bcb898d11d4c19a26e70dc189cb8501def24b83cdcec,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716928761367501777,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ng8mq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0b1264-09c7-44b2-ba8c-e145e825fdbe,},Annotations:map[string]string{io.kubernetes.container.hash: 3bac967e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05d5882852e6eb482b9e3ba850d60958305fa84234fd741f3345967a57b1c1f9,PodSandboxId:54beb07b658e504e1a48a48b45286f12adc245e7ae631b29d6fdd4debbc5e82a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f
9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716928741088331292,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f1c036da804e98d08f1608248bc0a8f,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:650c6f374c3b3e6697c77d68d23551a1335fae362d137a464df72e2bc23e4e14,PodSandboxId:232d528c76896db38a96f5da2a0ed0761f4f63d26ec15dd06b5b221600ccabff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1716928740991520694,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35af7b91312437941e451d7e638db460,},Annotations:map[string]string{io.kubernetes.container.hash: cd1ed920,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9ca8cf35-8bd0-4406-9c61-5d5e7b465f42 name=/runtime.v1.RuntimeService/ListContainers
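The ListContainersResponse dumps in the CRI-O log above are the runtime answering ListContainers RPCs over the CRI socket. A minimal sketch of issuing the same call directly, assuming CRI-O's default socket path /var/run/crio/crio.sock and the k8s.io/cri-api v1 client; an empty filter returns the full container list, matching the "No filters were applied" lines:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the CRI-O CRI socket; adjust the path if your runtime uses a different one.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial crio: %v", err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Empty ListContainersRequest: no filter, so the full container list comes back.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatalf("ListContainers: %v", err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s %s (attempt %d) state=%s\n",
			c.Id, c.Metadata.Name, c.Metadata.Attempt, c.State)
	}
}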
	May 28 20:52:22 ha-908878 crio[3860]: time="2024-05-28 20:52:22.571022324Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=749c6b9d-c6f7-49ae-ad2c-77b9533733af name=/runtime.v1.RuntimeService/Version
	May 28 20:52:22 ha-908878 crio[3860]: time="2024-05-28 20:52:22.571120043Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=749c6b9d-c6f7-49ae-ad2c-77b9533733af name=/runtime.v1.RuntimeService/Version
	May 28 20:52:22 ha-908878 crio[3860]: time="2024-05-28 20:52:22.572160899Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7d5dcdee-361f-4d40-adda-8794ac8ff24c name=/runtime.v1.ImageService/ImageFsInfo
	May 28 20:52:22 ha-908878 crio[3860]: time="2024-05-28 20:52:22.572586021Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716929542572565200,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7d5dcdee-361f-4d40-adda-8794ac8ff24c name=/runtime.v1.ImageService/ImageFsInfo
	May 28 20:52:22 ha-908878 crio[3860]: time="2024-05-28 20:52:22.573335261Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9db09d58-244a-4fa6-9808-7a20df919070 name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:52:22 ha-908878 crio[3860]: time="2024-05-28 20:52:22.573411231Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9db09d58-244a-4fa6-9808-7a20df919070 name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:52:22 ha-908878 crio[3860]: time="2024-05-28 20:52:22.573817079Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:23d89b9262db69731a4648e821b6ee02ddbcd64e953f442b0b1c790ad99e06bb,PodSandboxId:a10360900a6684a37e5da1e4b53126389f65ecdb98ad83f88412addfdf85e402,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716929492179728308,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79872e2-b267-446a-99dc-5bf9f398d31c,},Annotations:map[string]string{io.kubernetes.container.hash: 1d6aac92,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba9034a620e1e980ba83271c2686d0e2cc1672f83aa4b19c3789a2bcda09a040,PodSandboxId:cf4ac2dd6e8f4c464322a6fed240ad979ba60c14ae71f12bcd94430d7a8d2028,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1716929468185478357,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4mzh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8069a7ea-0ab1-4064-b982-867dbdfd97aa,},Annotations:map[string]string{io.kubernetes.container.hash: 730aff3c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4de963c4394e867a25c17c67c272c43fcf46165892ff733312639139f72f5cc8,PodSandboxId:a10360900a6684a37e5da1e4b53126389f65ecdb98ad83f88412addfdf85e402,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716929446183340156,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79872e2-b267-446a-99dc-5bf9f398d31c,},Annotations:map[string]string{io.kubernetes.container.hash: 1d6aac92,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5457554337f0db189ea45788d726f4b927d69bfc466967041382543f02be8b80,PodSandboxId:a140a8d8883554f2f866690c6b175764121c70dbd919a9954e74a7003deaccca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716929446177076854,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 386e57d3e5870130913571793cc3ee94,},Annotations:map[string]string{io.kubernetes.container.hash: f40b1b90,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f796b4c1fcb382b678803794da8228990266f95adb1e99f62cb07eb6a2dc2b0e,PodSandboxId:fc72b589827f1a220bf5901fb646f2f7056f1d2ebd15499fbaf01fc162c34ed1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716929443175818326,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c574865fe7260be39c1b7f67609414,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:805069c4ea3f52b056a6db0b2cbc48c32b9aec82e2eced5975d131a3d6813894,PodSandboxId:4cffb5b7c6c9c681ffed44e3b79a1b4db97beb4e3bc56f7c0bebbf9be6e48c4d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716929433445060154,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ljbzs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a49d7b7-d8ae-44a8-8393-51781cf73591,},Annotations:map[string]string{io.kubernetes.container.hash: 1fd948c0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41c6e92ce9a6765f9775e85692043edbc3cacb6d1eb3f9c07f81dc5fc71305a5,PodSandboxId:144ffb432d19700f4db1dbee070861a5bafb881fb8aaf7bd4c4b4a06bebe57fb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716929411246533502,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e83b4276fb38a7bed5e82c53c2dba82,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:278ab03af8f2313b514521259d04b44733ed754d0d537ac169ab50c94bc2944c,PodSandboxId:ea611d2d609918a13a673876b8f432aa70108f7177ebae9950b9f6eccbdc2ab9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716929400629960416,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ng8mq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0b1264-09c7-44b2-ba8c-e145e825fdbe,},Annotations:map[string]string{io.kubernetes.container.hash: 3bac967e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbe5284b
67f85e03d1cd00b308ec28baa2dd3f031d560ef8bcee3e352afbd4f0,PodSandboxId:cf4ac2dd6e8f4c464322a6fed240ad979ba60c14ae71f12bcd94430d7a8d2028,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1716929400347691445,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4mzh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8069a7ea-0ab1-4064-b982-867dbdfd97aa,},Annotations:map[string]string{io.kubernetes.container.hash: 730aff3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4514d4354f5473ce91d574749050b5534564282802b
dbac026aa2ea297033f90,PodSandboxId:3a8f3b7df90322d86bb148e5f38eae2fe33ca7873be9f745c0c6db25143dc42a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716929400480532978,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mvx67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b51beb7-0397-4008-b878-97edd41c6b94,},Annotations:map[string]string{io.kubernetes.container.hash: 10935396,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d3a1aa224cb0622f2a2e529a50b304df8ff9e2738e1dc6698099e3a06dba136,PodSandboxId:c7cb03481617d80cb9f9dcef56558b44a28163ee1857e6c9900a3ff7ef9db308,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716929400208165709,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35af7b91312437941e451d7e638db460,},Annotations:map[string]string{io.kubernetes.container.hash: cd1ed920,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c540689ad07f7ee30a68b6951597a2a7519d5077f0d3603cb0a035ebaab6dc15,PodSandboxId:d59153638107de764c3747df65809d3cdc474479ba91b817e5d7b3c598f84cb1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716929400194256948,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5fmns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a3bda1-29ba-4982-baf5-0adc97b4eb45,},Annotations:map[string]string{io.kubernetes.container.hash: dc173f12,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"}
,{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:512ee36cfc30be3bbea5a2852da22e2052a8bc2f3c461e7e0f1c5bdab2356ceb,PodSandboxId:fc72b589827f1a220bf5901fb646f2f7056f1d2ebd15499fbaf01fc162c34ed1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716929400105679968,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c574865fe7260be39c1b7f676094
14,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7611fb5205e4379847769c096d7bee8dd1dcbf6921b3cd3e9e213bd651f0650a,PodSandboxId:bfa5f863df0d89a8dc8be0920e4334f7380837022415720c4d0b630df3fc2adf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716929400018305553,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f1c036da804e98d08f1608248bc0a8f,},Annotations:map[
string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eea72764c6cef58a30efce77aa95fcea5c0414983c225f7e96112a5a8c65c5a,PodSandboxId:a140a8d8883554f2f866690c6b175764121c70dbd919a9954e74a7003deaccca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716929399958628122,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 386e57d3e5870130913571793cc3ee94,},Annotations:map[string]string{io.kuber
netes.container.hash: f40b1b90,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92c83dd481e567399f2a52a7fbc48eccc99c0cb418fa571fd10df44f361a9f39,PodSandboxId:dfbac4c22bc27b76f3af221ef78b5eb13bffba587238a75f6978ec32db0d2669,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716928912917661950,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ljbzs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a49d7b7-d8ae-44a8-8393-51781cf73591,},Annotations:map[string]string{io.kuberne
tes.container.hash: 1fd948c0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c38e07fa546e3574439db0d8c463f8598be9b5ee1b022328a10e812abe191a6,PodSandboxId:fb8a83ba500b4c9c50bcef1be750bfc7e5a7db94160b054581efcafa03519147,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716928766590753062,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mvx67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b51beb7-0397-4008-b878-97edd41c6b94,},Annotations:map[string]string{io.kubernetes.container.hash: 10935396,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2470320e3bec5ccf0d58f255e4bc440917dfb3fae12b2a48365113a51d7dd6e9,PodSandboxId:5333c6894c4460cb033cd36d59ef316b9716e099ae9997c69bc25e869c87b03d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716928766572153338,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-5fmns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a3bda1-29ba-4982-baf5-0adc97b4eb45,},Annotations:map[string]string{io.kubernetes.container.hash: dc173f12,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97ba5f2725852aa19d200b1005c8bc9678a21a2b0a0b768cbf10f1dde692ecbe,PodSandboxId:2a5f076d2569c649f3e0bcb898d11d4c19a26e70dc189cb8501def24b83cdcec,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716928761367501777,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ng8mq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0b1264-09c7-44b2-ba8c-e145e825fdbe,},Annotations:map[string]string{io.kubernetes.container.hash: 3bac967e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05d5882852e6eb482b9e3ba850d60958305fa84234fd741f3345967a57b1c1f9,PodSandboxId:54beb07b658e504e1a48a48b45286f12adc245e7ae631b29d6fdd4debbc5e82a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f
9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716928741088331292,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f1c036da804e98d08f1608248bc0a8f,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:650c6f374c3b3e6697c77d68d23551a1335fae362d137a464df72e2bc23e4e14,PodSandboxId:232d528c76896db38a96f5da2a0ed0761f4f63d26ec15dd06b5b221600ccabff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1716928740991520694,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35af7b91312437941e451d7e638db460,},Annotations:map[string]string{io.kubernetes.container.hash: cd1ed920,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9db09d58-244a-4fa6-9808-7a20df919070 name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:52:22 ha-908878 crio[3860]: time="2024-05-28 20:52:22.624945774Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=157fd348-ac92-46ec-aec7-321ae25685d6 name=/runtime.v1.RuntimeService/Version
	May 28 20:52:22 ha-908878 crio[3860]: time="2024-05-28 20:52:22.625054578Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=157fd348-ac92-46ec-aec7-321ae25685d6 name=/runtime.v1.RuntimeService/Version
	May 28 20:52:22 ha-908878 crio[3860]: time="2024-05-28 20:52:22.626319138Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4776ad6f-5892-49da-8878-555bd87e4a48 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 20:52:22 ha-908878 crio[3860]: time="2024-05-28 20:52:22.627018350Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716929542626987255,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4776ad6f-5892-49da-8878-555bd87e4a48 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 20:52:22 ha-908878 crio[3860]: time="2024-05-28 20:52:22.627667678Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=45b15216-d2ef-4776-b36d-002e5ce40696 name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:52:22 ha-908878 crio[3860]: time="2024-05-28 20:52:22.627743408Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=45b15216-d2ef-4776-b36d-002e5ce40696 name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:52:22 ha-908878 crio[3860]: time="2024-05-28 20:52:22.628408879Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:23d89b9262db69731a4648e821b6ee02ddbcd64e953f442b0b1c790ad99e06bb,PodSandboxId:a10360900a6684a37e5da1e4b53126389f65ecdb98ad83f88412addfdf85e402,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716929492179728308,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79872e2-b267-446a-99dc-5bf9f398d31c,},Annotations:map[string]string{io.kubernetes.container.hash: 1d6aac92,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba9034a620e1e980ba83271c2686d0e2cc1672f83aa4b19c3789a2bcda09a040,PodSandboxId:cf4ac2dd6e8f4c464322a6fed240ad979ba60c14ae71f12bcd94430d7a8d2028,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1716929468185478357,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4mzh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8069a7ea-0ab1-4064-b982-867dbdfd97aa,},Annotations:map[string]string{io.kubernetes.container.hash: 730aff3c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4de963c4394e867a25c17c67c272c43fcf46165892ff733312639139f72f5cc8,PodSandboxId:a10360900a6684a37e5da1e4b53126389f65ecdb98ad83f88412addfdf85e402,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716929446183340156,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79872e2-b267-446a-99dc-5bf9f398d31c,},Annotations:map[string]string{io.kubernetes.container.hash: 1d6aac92,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5457554337f0db189ea45788d726f4b927d69bfc466967041382543f02be8b80,PodSandboxId:a140a8d8883554f2f866690c6b175764121c70dbd919a9954e74a7003deaccca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716929446177076854,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 386e57d3e5870130913571793cc3ee94,},Annotations:map[string]string{io.kubernetes.container.hash: f40b1b90,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f796b4c1fcb382b678803794da8228990266f95adb1e99f62cb07eb6a2dc2b0e,PodSandboxId:fc72b589827f1a220bf5901fb646f2f7056f1d2ebd15499fbaf01fc162c34ed1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716929443175818326,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c574865fe7260be39c1b7f67609414,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:805069c4ea3f52b056a6db0b2cbc48c32b9aec82e2eced5975d131a3d6813894,PodSandboxId:4cffb5b7c6c9c681ffed44e3b79a1b4db97beb4e3bc56f7c0bebbf9be6e48c4d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716929433445060154,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ljbzs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a49d7b7-d8ae-44a8-8393-51781cf73591,},Annotations:map[string]string{io.kubernetes.container.hash: 1fd948c0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41c6e92ce9a6765f9775e85692043edbc3cacb6d1eb3f9c07f81dc5fc71305a5,PodSandboxId:144ffb432d19700f4db1dbee070861a5bafb881fb8aaf7bd4c4b4a06bebe57fb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716929411246533502,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e83b4276fb38a7bed5e82c53c2dba82,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:278ab03af8f2313b514521259d04b44733ed754d0d537ac169ab50c94bc2944c,PodSandboxId:ea611d2d609918a13a673876b8f432aa70108f7177ebae9950b9f6eccbdc2ab9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716929400629960416,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ng8mq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0b1264-09c7-44b2-ba8c-e145e825fdbe,},Annotations:map[string]string{io.kubernetes.container.hash: 3bac967e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbe5284b
67f85e03d1cd00b308ec28baa2dd3f031d560ef8bcee3e352afbd4f0,PodSandboxId:cf4ac2dd6e8f4c464322a6fed240ad979ba60c14ae71f12bcd94430d7a8d2028,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1716929400347691445,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4mzh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8069a7ea-0ab1-4064-b982-867dbdfd97aa,},Annotations:map[string]string{io.kubernetes.container.hash: 730aff3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4514d4354f5473ce91d574749050b5534564282802b
dbac026aa2ea297033f90,PodSandboxId:3a8f3b7df90322d86bb148e5f38eae2fe33ca7873be9f745c0c6db25143dc42a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716929400480532978,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mvx67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b51beb7-0397-4008-b878-97edd41c6b94,},Annotations:map[string]string{io.kubernetes.container.hash: 10935396,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d3a1aa224cb0622f2a2e529a50b304df8ff9e2738e1dc6698099e3a06dba136,PodSandboxId:c7cb03481617d80cb9f9dcef56558b44a28163ee1857e6c9900a3ff7ef9db308,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716929400208165709,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35af7b91312437941e451d7e638db460,},Annotations:map[string]string{io.kubernetes.container.hash: cd1ed920,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c540689ad07f7ee30a68b6951597a2a7519d5077f0d3603cb0a035ebaab6dc15,PodSandboxId:d59153638107de764c3747df65809d3cdc474479ba91b817e5d7b3c598f84cb1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716929400194256948,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5fmns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a3bda1-29ba-4982-baf5-0adc97b4eb45,},Annotations:map[string]string{io.kubernetes.container.hash: dc173f12,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"}
,{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:512ee36cfc30be3bbea5a2852da22e2052a8bc2f3c461e7e0f1c5bdab2356ceb,PodSandboxId:fc72b589827f1a220bf5901fb646f2f7056f1d2ebd15499fbaf01fc162c34ed1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716929400105679968,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c574865fe7260be39c1b7f676094
14,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7611fb5205e4379847769c096d7bee8dd1dcbf6921b3cd3e9e213bd651f0650a,PodSandboxId:bfa5f863df0d89a8dc8be0920e4334f7380837022415720c4d0b630df3fc2adf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716929400018305553,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f1c036da804e98d08f1608248bc0a8f,},Annotations:map[
string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eea72764c6cef58a30efce77aa95fcea5c0414983c225f7e96112a5a8c65c5a,PodSandboxId:a140a8d8883554f2f866690c6b175764121c70dbd919a9954e74a7003deaccca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716929399958628122,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 386e57d3e5870130913571793cc3ee94,},Annotations:map[string]string{io.kuber
netes.container.hash: f40b1b90,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92c83dd481e567399f2a52a7fbc48eccc99c0cb418fa571fd10df44f361a9f39,PodSandboxId:dfbac4c22bc27b76f3af221ef78b5eb13bffba587238a75f6978ec32db0d2669,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716928912917661950,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ljbzs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a49d7b7-d8ae-44a8-8393-51781cf73591,},Annotations:map[string]string{io.kuberne
tes.container.hash: 1fd948c0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c38e07fa546e3574439db0d8c463f8598be9b5ee1b022328a10e812abe191a6,PodSandboxId:fb8a83ba500b4c9c50bcef1be750bfc7e5a7db94160b054581efcafa03519147,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716928766590753062,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mvx67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b51beb7-0397-4008-b878-97edd41c6b94,},Annotations:map[string]string{io.kubernetes.container.hash: 10935396,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2470320e3bec5ccf0d58f255e4bc440917dfb3fae12b2a48365113a51d7dd6e9,PodSandboxId:5333c6894c4460cb033cd36d59ef316b9716e099ae9997c69bc25e869c87b03d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716928766572153338,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-5fmns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a3bda1-29ba-4982-baf5-0adc97b4eb45,},Annotations:map[string]string{io.kubernetes.container.hash: dc173f12,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97ba5f2725852aa19d200b1005c8bc9678a21a2b0a0b768cbf10f1dde692ecbe,PodSandboxId:2a5f076d2569c649f3e0bcb898d11d4c19a26e70dc189cb8501def24b83cdcec,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716928761367501777,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ng8mq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0b1264-09c7-44b2-ba8c-e145e825fdbe,},Annotations:map[string]string{io.kubernetes.container.hash: 3bac967e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05d5882852e6eb482b9e3ba850d60958305fa84234fd741f3345967a57b1c1f9,PodSandboxId:54beb07b658e504e1a48a48b45286f12adc245e7ae631b29d6fdd4debbc5e82a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f
9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716928741088331292,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f1c036da804e98d08f1608248bc0a8f,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:650c6f374c3b3e6697c77d68d23551a1335fae362d137a464df72e2bc23e4e14,PodSandboxId:232d528c76896db38a96f5da2a0ed0761f4f63d26ec15dd06b5b221600ccabff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1716928740991520694,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35af7b91312437941e451d7e638db460,},Annotations:map[string]string{io.kubernetes.container.hash: cd1ed920,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=45b15216-d2ef-4776-b36d-002e5ce40696 name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:52:22 ha-908878 crio[3860]: time="2024-05-28 20:52:22.676223714Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=39c31291-17fd-468a-a948-69cfa249ef40 name=/runtime.v1.RuntimeService/Version
	May 28 20:52:22 ha-908878 crio[3860]: time="2024-05-28 20:52:22.676526843Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=39c31291-17fd-468a-a948-69cfa249ef40 name=/runtime.v1.RuntimeService/Version
	May 28 20:52:22 ha-908878 crio[3860]: time="2024-05-28 20:52:22.678040299Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f63600c0-31a4-4a6e-b319-06297e057925 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 20:52:22 ha-908878 crio[3860]: time="2024-05-28 20:52:22.678652365Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716929542678624756,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f63600c0-31a4-4a6e-b319-06297e057925 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 20:52:22 ha-908878 crio[3860]: time="2024-05-28 20:52:22.679282687Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a3eddc57-e72f-41ff-a984-83174c10a3b0 name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:52:22 ha-908878 crio[3860]: time="2024-05-28 20:52:22.679346476Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a3eddc57-e72f-41ff-a984-83174c10a3b0 name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:52:22 ha-908878 crio[3860]: time="2024-05-28 20:52:22.679971822Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:23d89b9262db69731a4648e821b6ee02ddbcd64e953f442b0b1c790ad99e06bb,PodSandboxId:a10360900a6684a37e5da1e4b53126389f65ecdb98ad83f88412addfdf85e402,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716929492179728308,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79872e2-b267-446a-99dc-5bf9f398d31c,},Annotations:map[string]string{io.kubernetes.container.hash: 1d6aac92,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba9034a620e1e980ba83271c2686d0e2cc1672f83aa4b19c3789a2bcda09a040,PodSandboxId:cf4ac2dd6e8f4c464322a6fed240ad979ba60c14ae71f12bcd94430d7a8d2028,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1716929468185478357,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4mzh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8069a7ea-0ab1-4064-b982-867dbdfd97aa,},Annotations:map[string]string{io.kubernetes.container.hash: 730aff3c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4de963c4394e867a25c17c67c272c43fcf46165892ff733312639139f72f5cc8,PodSandboxId:a10360900a6684a37e5da1e4b53126389f65ecdb98ad83f88412addfdf85e402,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716929446183340156,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79872e2-b267-446a-99dc-5bf9f398d31c,},Annotations:map[string]string{io.kubernetes.container.hash: 1d6aac92,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5457554337f0db189ea45788d726f4b927d69bfc466967041382543f02be8b80,PodSandboxId:a140a8d8883554f2f866690c6b175764121c70dbd919a9954e74a7003deaccca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716929446177076854,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 386e57d3e5870130913571793cc3ee94,},Annotations:map[string]string{io.kubernetes.container.hash: f40b1b90,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f796b4c1fcb382b678803794da8228990266f95adb1e99f62cb07eb6a2dc2b0e,PodSandboxId:fc72b589827f1a220bf5901fb646f2f7056f1d2ebd15499fbaf01fc162c34ed1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716929443175818326,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c574865fe7260be39c1b7f67609414,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:805069c4ea3f52b056a6db0b2cbc48c32b9aec82e2eced5975d131a3d6813894,PodSandboxId:4cffb5b7c6c9c681ffed44e3b79a1b4db97beb4e3bc56f7c0bebbf9be6e48c4d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716929433445060154,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ljbzs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a49d7b7-d8ae-44a8-8393-51781cf73591,},Annotations:map[string]string{io.kubernetes.container.hash: 1fd948c0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41c6e92ce9a6765f9775e85692043edbc3cacb6d1eb3f9c07f81dc5fc71305a5,PodSandboxId:144ffb432d19700f4db1dbee070861a5bafb881fb8aaf7bd4c4b4a06bebe57fb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716929411246533502,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e83b4276fb38a7bed5e82c53c2dba82,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:278ab03af8f2313b514521259d04b44733ed754d0d537ac169ab50c94bc2944c,PodSandboxId:ea611d2d609918a13a673876b8f432aa70108f7177ebae9950b9f6eccbdc2ab9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716929400629960416,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ng8mq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0b1264-09c7-44b2-ba8c-e145e825fdbe,},Annotations:map[string]string{io.kubernetes.container.hash: 3bac967e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbe5284b
67f85e03d1cd00b308ec28baa2dd3f031d560ef8bcee3e352afbd4f0,PodSandboxId:cf4ac2dd6e8f4c464322a6fed240ad979ba60c14ae71f12bcd94430d7a8d2028,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1716929400347691445,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4mzh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8069a7ea-0ab1-4064-b982-867dbdfd97aa,},Annotations:map[string]string{io.kubernetes.container.hash: 730aff3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4514d4354f5473ce91d574749050b5534564282802b
dbac026aa2ea297033f90,PodSandboxId:3a8f3b7df90322d86bb148e5f38eae2fe33ca7873be9f745c0c6db25143dc42a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716929400480532978,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mvx67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b51beb7-0397-4008-b878-97edd41c6b94,},Annotations:map[string]string{io.kubernetes.container.hash: 10935396,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d3a1aa224cb0622f2a2e529a50b304df8ff9e2738e1dc6698099e3a06dba136,PodSandboxId:c7cb03481617d80cb9f9dcef56558b44a28163ee1857e6c9900a3ff7ef9db308,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716929400208165709,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35af7b91312437941e451d7e638db460,},Annotations:map[string]string{io.kubernetes.container.hash: cd1ed920,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c540689ad07f7ee30a68b6951597a2a7519d5077f0d3603cb0a035ebaab6dc15,PodSandboxId:d59153638107de764c3747df65809d3cdc474479ba91b817e5d7b3c598f84cb1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716929400194256948,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5fmns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a3bda1-29ba-4982-baf5-0adc97b4eb45,},Annotations:map[string]string{io.kubernetes.container.hash: dc173f12,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"}
,{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:512ee36cfc30be3bbea5a2852da22e2052a8bc2f3c461e7e0f1c5bdab2356ceb,PodSandboxId:fc72b589827f1a220bf5901fb646f2f7056f1d2ebd15499fbaf01fc162c34ed1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716929400105679968,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c574865fe7260be39c1b7f676094
14,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7611fb5205e4379847769c096d7bee8dd1dcbf6921b3cd3e9e213bd651f0650a,PodSandboxId:bfa5f863df0d89a8dc8be0920e4334f7380837022415720c4d0b630df3fc2adf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716929400018305553,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f1c036da804e98d08f1608248bc0a8f,},Annotations:map[
string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eea72764c6cef58a30efce77aa95fcea5c0414983c225f7e96112a5a8c65c5a,PodSandboxId:a140a8d8883554f2f866690c6b175764121c70dbd919a9954e74a7003deaccca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716929399958628122,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 386e57d3e5870130913571793cc3ee94,},Annotations:map[string]string{io.kuber
netes.container.hash: f40b1b90,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92c83dd481e567399f2a52a7fbc48eccc99c0cb418fa571fd10df44f361a9f39,PodSandboxId:dfbac4c22bc27b76f3af221ef78b5eb13bffba587238a75f6978ec32db0d2669,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716928912917661950,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ljbzs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a49d7b7-d8ae-44a8-8393-51781cf73591,},Annotations:map[string]string{io.kuberne
tes.container.hash: 1fd948c0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c38e07fa546e3574439db0d8c463f8598be9b5ee1b022328a10e812abe191a6,PodSandboxId:fb8a83ba500b4c9c50bcef1be750bfc7e5a7db94160b054581efcafa03519147,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716928766590753062,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mvx67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b51beb7-0397-4008-b878-97edd41c6b94,},Annotations:map[string]string{io.kubernetes.container.hash: 10935396,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2470320e3bec5ccf0d58f255e4bc440917dfb3fae12b2a48365113a51d7dd6e9,PodSandboxId:5333c6894c4460cb033cd36d59ef316b9716e099ae9997c69bc25e869c87b03d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716928766572153338,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-5fmns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a3bda1-29ba-4982-baf5-0adc97b4eb45,},Annotations:map[string]string{io.kubernetes.container.hash: dc173f12,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97ba5f2725852aa19d200b1005c8bc9678a21a2b0a0b768cbf10f1dde692ecbe,PodSandboxId:2a5f076d2569c649f3e0bcb898d11d4c19a26e70dc189cb8501def24b83cdcec,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716928761367501777,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ng8mq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0b1264-09c7-44b2-ba8c-e145e825fdbe,},Annotations:map[string]string{io.kubernetes.container.hash: 3bac967e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05d5882852e6eb482b9e3ba850d60958305fa84234fd741f3345967a57b1c1f9,PodSandboxId:54beb07b658e504e1a48a48b45286f12adc245e7ae631b29d6fdd4debbc5e82a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f
9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716928741088331292,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f1c036da804e98d08f1608248bc0a8f,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:650c6f374c3b3e6697c77d68d23551a1335fae362d137a464df72e2bc23e4e14,PodSandboxId:232d528c76896db38a96f5da2a0ed0761f4f63d26ec15dd06b5b221600ccabff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1716928740991520694,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35af7b91312437941e451d7e638db460,},Annotations:map[string]string{io.kubernetes.container.hash: cd1ed920,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a3eddc57-e72f-41ff-a984-83174c10a3b0 name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:52:22 ha-908878 crio[3860]: time="2024-05-28 20:52:22.692437613Z" level=debug msg="Request: &VersionRequest{Version:0.1.0,}" file="otel-collector/interceptors.go:62" id=2af1779e-0eb0-4548-94b0-a7910a6f0aba name=/runtime.v1.RuntimeService/Version
	May 28 20:52:22 ha-908878 crio[3860]: time="2024-05-28 20:52:22.692541671Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2af1779e-0eb0-4548-94b0-a7910a6f0aba name=/runtime.v1.RuntimeService/Version
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	23d89b9262db6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      50 seconds ago       Running             storage-provisioner       4                   a10360900a668       storage-provisioner
	ba9034a620e1e       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      About a minute ago   Running             kindnet-cni               3                   cf4ac2dd6e8f4       kindnet-x4mzh
	4de963c4394e8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Exited              storage-provisioner       3                   a10360900a668       storage-provisioner
	5457554337f0d       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      About a minute ago   Running             kube-apiserver            3                   a140a8d888355       kube-apiserver-ha-908878
	f796b4c1fcb38       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      About a minute ago   Running             kube-controller-manager   2                   fc72b589827f1       kube-controller-manager-ha-908878
	805069c4ea3f5       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   4cffb5b7c6c9c       busybox-fc5497c4f-ljbzs
	41c6e92ce9a67       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   144ffb432d197       kube-vip-ha-908878
	278ab03af8f23       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      2 minutes ago        Running             kube-proxy                1                   ea611d2d60991       kube-proxy-ng8mq
	4514d4354f547       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   3a8f3b7df9032       coredns-7db6d8ff4d-mvx67
	bbe5284b67f85       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      2 minutes ago        Exited              kindnet-cni               2                   cf4ac2dd6e8f4       kindnet-x4mzh
	7d3a1aa224cb0       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   c7cb03481617d       etcd-ha-908878
	c540689ad07f7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   d59153638107d       coredns-7db6d8ff4d-5fmns
	512ee36cfc30b       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      2 minutes ago        Exited              kube-controller-manager   1                   fc72b589827f1       kube-controller-manager-ha-908878
	7611fb5205e43       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      2 minutes ago        Running             kube-scheduler            1                   bfa5f863df0d8       kube-scheduler-ha-908878
	1eea72764c6ce       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      2 minutes ago        Exited              kube-apiserver            2                   a140a8d888355       kube-apiserver-ha-908878
	92c83dd481e56       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   dfbac4c22bc27       busybox-fc5497c4f-ljbzs
	7c38e07fa546e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      12 minutes ago       Exited              coredns                   0                   fb8a83ba500b4       coredns-7db6d8ff4d-mvx67
	2470320e3bec5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      12 minutes ago       Exited              coredns                   0                   5333c6894c446       coredns-7db6d8ff4d-5fmns
	97ba5f2725852       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      13 minutes ago       Exited              kube-proxy                0                   2a5f076d2569c       kube-proxy-ng8mq
	05d5882852e6e       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      13 minutes ago       Exited              kube-scheduler            0                   54beb07b658e5       kube-scheduler-ha-908878
	650c6f374c3b3       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago       Exited              etcd                      0                   232d528c76896       etcd-ha-908878
	
	
	==> coredns [2470320e3bec5ccf0d58f255e4bc440917dfb3fae12b2a48365113a51d7dd6e9] <==
	[INFO] 10.244.2.2:41613 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002489251s
	[INFO] 10.244.2.2:55408 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000147549s
	[INFO] 10.244.0.4:57170 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000374705s
	[INFO] 10.244.0.4:58966 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000155963s
	[INFO] 10.244.0.4:35423 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000111865s
	[INFO] 10.244.1.2:37835 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000079714s
	[INFO] 10.244.1.2:45922 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000128914s
	[INFO] 10.244.2.2:49120 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102234s
	[INFO] 10.244.2.2:59817 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113316s
	[INFO] 10.244.1.2:33990 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104132s
	[INFO] 10.244.1.2:57343 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000065665s
	[INFO] 10.244.1.2:37008 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000144249s
	[INFO] 10.244.2.2:57641 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000201576s
	[INFO] 10.244.0.4:55430 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00016202s
	[INFO] 10.244.0.4:58197 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000154574s
	[INFO] 10.244.0.4:43002 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000159971s
	[INFO] 10.244.1.2:33008 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159565s
	[INFO] 10.244.1.2:55799 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000106231s
	[INFO] 10.244.1.2:34935 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000119985s
	[INFO] 10.244.1.2:55524 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000077247s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
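The reflector errors above come from coredns's kubernetes plugin, which lists and then watches Services, EndpointSlices, and Namespaces through the in-cluster service VIP (https://10.96.0.1:443); the GOAWAY/credentials failures line up with the kube-apiserver restarts visible in the container table. As a rough illustration only (a hypothetical snippet, not the plugin's actual code), the client-go sketch below performs the same initial LIST (limit=500) the plugin attempts, assuming it runs in a pod with in-cluster credentials.

    // Illustrative sketch: reproduce the kubernetes plugin's initial LIST of Services.
    // Assumptions: runs in-cluster (service account mounted), client-go available.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        // In-cluster config resolves the same kubernetes.default service VIP
        // (10.96.0.1:443) that the reflector errors above fail to reach.
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Same LIST (limit=500) the plugin issues before it can start a WATCH.
        svcs, err := client.CoreV1().Services(metav1.NamespaceAll).List(
            context.Background(), metav1.ListOptions{Limit: 500})
        if err != nil {
            fmt.Println("list services failed (apiserver unreachable?):", err)
            return
        }
        fmt.Println("services visible to this pod:", len(svcs.Items))
    }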
	
	
	==> coredns [4514d4354f5473ce91d574749050b5534564282802bdbac026aa2ea297033f90] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:52778->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:52778->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:52788->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:52788->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [7c38e07fa546e3574439db0d8c463f8598be9b5ee1b022328a10e812abe191a6] <==
	[INFO] 10.244.2.2:58602 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000170027s
	[INFO] 10.244.0.4:43029 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001811296s
	[INFO] 10.244.0.4:49612 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098819s
	[INFO] 10.244.0.4:33728 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000042492s
	[INFO] 10.244.0.4:34284 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001158314s
	[INFO] 10.244.0.4:52540 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000045508s
	[INFO] 10.244.1.2:36534 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139592s
	[INFO] 10.244.1.2:55059 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00181265s
	[INFO] 10.244.1.2:57133 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001147785s
	[INFO] 10.244.1.2:59156 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00008284s
	[INFO] 10.244.1.2:56011 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000189969s
	[INFO] 10.244.1.2:57157 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076075s
	[INFO] 10.244.2.2:38176 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112538s
	[INFO] 10.244.2.2:54457 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000111343s
	[INFO] 10.244.0.4:46728 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104994s
	[INFO] 10.244.0.4:49514 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000077463s
	[INFO] 10.244.0.4:40805 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103396s
	[INFO] 10.244.0.4:41445 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093035s
	[INFO] 10.244.1.2:48615 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169745s
	[INFO] 10.244.2.2:39740 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00022698s
	[INFO] 10.244.2.2:42139 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000182159s
	[INFO] 10.244.2.2:54665 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00035602s
	[INFO] 10.244.0.4:33063 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000104255s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c540689ad07f7ee30a68b6951597a2a7519d5077f0d3603cb0a035ebaab6dc15] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:60436->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:60436->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-908878
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-908878
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82
	                    minikube.k8s.io/name=ha-908878
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_28T20_39_08_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 20:39:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-908878
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 20:52:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 May 2024 20:50:50 +0000   Tue, 28 May 2024 20:39:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 May 2024 20:50:50 +0000   Tue, 28 May 2024 20:39:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 May 2024 20:50:50 +0000   Tue, 28 May 2024 20:39:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 May 2024 20:50:50 +0000   Tue, 28 May 2024 20:39:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.100
	  Hostname:    ha-908878
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a470f4bebd094a03b2a08db3a205d097
	  System UUID:                a470f4be-bd09-4a03-b2a0-8db3a205d097
	  Boot ID:                    e5dc2485-8c44-4c4f-899c-7eb02750525b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-ljbzs              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7db6d8ff4d-5fmns             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7db6d8ff4d-mvx67             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-908878                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-x4mzh                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-908878             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-908878    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-ng8mq                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-908878             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-908878                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 13m    kube-proxy       
	  Normal   Starting                 94s    kube-proxy       
	  Normal   NodeHasNoDiskPressure    13m    kubelet          Node ha-908878 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m    kubelet          Node ha-908878 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     13m    kubelet          Node ha-908878 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m    node-controller  Node ha-908878 event: Registered Node ha-908878 in Controller
	  Normal   NodeReady                12m    kubelet          Node ha-908878 status is now: NodeReady
	  Normal   RegisteredNode           11m    node-controller  Node ha-908878 event: Registered Node ha-908878 in Controller
	  Normal   RegisteredNode           10m    node-controller  Node ha-908878 event: Registered Node ha-908878 in Controller
	  Warning  ContainerGCFailed        3m16s  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           86s    node-controller  Node ha-908878 event: Registered Node ha-908878 in Controller
	  Normal   RegisteredNode           83s    node-controller  Node ha-908878 event: Registered Node ha-908878 in Controller
	  Normal   RegisteredNode           29s    node-controller  Node ha-908878 event: Registered Node ha-908878 in Controller
	
	
	Name:               ha-908878-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-908878-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82
	                    minikube.k8s.io/name=ha-908878
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_28T20_40_11_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 20:40:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-908878-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 20:52:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 May 2024 20:51:31 +0000   Tue, 28 May 2024 20:50:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 May 2024 20:51:31 +0000   Tue, 28 May 2024 20:50:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 May 2024 20:51:31 +0000   Tue, 28 May 2024 20:50:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 May 2024 20:51:31 +0000   Tue, 28 May 2024 20:50:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.239
	  Hostname:    ha-908878-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8f91cea3af174de9a05db650e4662bbb
	  System UUID:                8f91cea3-af17-4de9-a05d-b650e4662bbb
	  Boot ID:                    6b8d7163-e895-4f42-9b4a-9c98cd4f26a8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-rfl74                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-908878-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-6prxw                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-908878-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-908878-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-pg89k                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-908878-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-908878-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 76s                  kube-proxy       
	  Normal  Starting                 12m                  kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)    kubelet          Node ha-908878-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)    kubelet          Node ha-908878-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)    kubelet          Node ha-908878-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                  node-controller  Node ha-908878-m02 event: Registered Node ha-908878-m02 in Controller
	  Normal  RegisteredNode           11m                  node-controller  Node ha-908878-m02 event: Registered Node ha-908878-m02 in Controller
	  Normal  RegisteredNode           10m                  node-controller  Node ha-908878-m02 event: Registered Node ha-908878-m02 in Controller
	  Normal  NodeNotReady             8m49s                node-controller  Node ha-908878-m02 status is now: NodeNotReady
	  Normal  Starting                 2m1s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m1s (x8 over 2m1s)  kubelet          Node ha-908878-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m1s (x8 over 2m1s)  kubelet          Node ha-908878-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m1s (x7 over 2m1s)  kubelet          Node ha-908878-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           86s                  node-controller  Node ha-908878-m02 event: Registered Node ha-908878-m02 in Controller
	  Normal  RegisteredNode           83s                  node-controller  Node ha-908878-m02 event: Registered Node ha-908878-m02 in Controller
	  Normal  RegisteredNode           29s                  node-controller  Node ha-908878-m02 event: Registered Node ha-908878-m02 in Controller
	
	
	Name:               ha-908878-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-908878-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82
	                    minikube.k8s.io/name=ha-908878
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_28T20_41_28_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 20:41:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-908878-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 20:52:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 May 2024 20:51:56 +0000   Tue, 28 May 2024 20:41:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 May 2024 20:51:56 +0000   Tue, 28 May 2024 20:41:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 May 2024 20:51:56 +0000   Tue, 28 May 2024 20:41:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 May 2024 20:51:56 +0000   Tue, 28 May 2024 20:41:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.73
	  Hostname:    ha-908878-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2e3e9f9367694cccab6cb31074c7abc1
	  System UUID:                2e3e9f93-6769-4ccc-ab6c-b31074c7abc1
	  Boot ID:                    9caaf246-f53e-406c-af94-f1f7a30e3662
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-ldbfj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-908878-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-fx2nj                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-ha-908878-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-908878-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-4vjp6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-908878-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-908878-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 38s                kube-proxy       
	  Normal   RegisteredNode           10m                node-controller  Node ha-908878-m03 event: Registered Node ha-908878-m03 in Controller
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node ha-908878-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-908878-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node ha-908878-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-908878-m03 event: Registered Node ha-908878-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-908878-m03 event: Registered Node ha-908878-m03 in Controller
	  Normal   RegisteredNode           85s                node-controller  Node ha-908878-m03 event: Registered Node ha-908878-m03 in Controller
	  Normal   RegisteredNode           83s                node-controller  Node ha-908878-m03 event: Registered Node ha-908878-m03 in Controller
	  Normal   Starting                 57s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  57s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  57s                kubelet          Node ha-908878-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    57s                kubelet          Node ha-908878-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     57s                kubelet          Node ha-908878-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 57s                kubelet          Node ha-908878-m03 has been rebooted, boot id: 9caaf246-f53e-406c-af94-f1f7a30e3662
	  Normal   RegisteredNode           29s                node-controller  Node ha-908878-m03 event: Registered Node ha-908878-m03 in Controller
	
	
	Name:               ha-908878-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-908878-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82
	                    minikube.k8s.io/name=ha-908878
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_28T20_42_26_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 20:42:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-908878-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 20:52:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 May 2024 20:52:14 +0000   Tue, 28 May 2024 20:52:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 May 2024 20:52:14 +0000   Tue, 28 May 2024 20:52:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 May 2024 20:52:14 +0000   Tue, 28 May 2024 20:52:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 May 2024 20:52:14 +0000   Tue, 28 May 2024 20:52:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.38
	  Hostname:    ha-908878-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f3c86941732a4e078803ce72d6cca1eb
	  System UUID:                f3c86941-732a-4e07-8803-ce72d6cca1eb
	  Boot ID:                    43d26a07-1717-47b3-b09c-28f8499f97e0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-68kxq       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m58s
	  kube-system                 kube-proxy-bnh2w    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4s                     kube-proxy       
	  Normal   Starting                 9m52s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  9m58s (x2 over 9m58s)  kubelet          Node ha-908878-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m58s (x2 over 9m58s)  kubelet          Node ha-908878-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m58s (x2 over 9m58s)  kubelet          Node ha-908878-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  9m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           9m56s                  node-controller  Node ha-908878-m04 event: Registered Node ha-908878-m04 in Controller
	  Normal   RegisteredNode           9m56s                  node-controller  Node ha-908878-m04 event: Registered Node ha-908878-m04 in Controller
	  Normal   RegisteredNode           9m54s                  node-controller  Node ha-908878-m04 event: Registered Node ha-908878-m04 in Controller
	  Normal   NodeReady                9m48s                  kubelet          Node ha-908878-m04 status is now: NodeReady
	  Normal   RegisteredNode           85s                    node-controller  Node ha-908878-m04 event: Registered Node ha-908878-m04 in Controller
	  Normal   RegisteredNode           83s                    node-controller  Node ha-908878-m04 event: Registered Node ha-908878-m04 in Controller
	  Normal   NodeNotReady             45s                    node-controller  Node ha-908878-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           29s                    node-controller  Node ha-908878-m04 event: Registered Node ha-908878-m04 in Controller
	  Normal   Starting                 9s                     kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                     kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9s (x2 over 9s)        kubelet          Node ha-908878-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x2 over 9s)        kubelet          Node ha-908878-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x2 over 9s)        kubelet          Node ha-908878-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 9s                     kubelet          Node ha-908878-m04 has been rebooted, boot id: 43d26a07-1717-47b3-b09c-28f8499f97e0
	  Normal   NodeReady                9s                     kubelet          Node ha-908878-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.578430] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.054216] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.052934] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.180850] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.119729] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.261744] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.070195] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +5.007183] systemd-fstab-generator[954]: Ignoring "noauto" option for root device
	[  +0.062643] kauditd_printk_skb: 158 callbacks suppressed
	[May28 20:39] systemd-fstab-generator[1373]: Ignoring "noauto" option for root device
	[  +0.085155] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.532403] kauditd_printk_skb: 18 callbacks suppressed
	[ +13.860818] kauditd_printk_skb: 38 callbacks suppressed
	[May28 20:40] kauditd_printk_skb: 24 callbacks suppressed
	[May28 20:49] systemd-fstab-generator[3779]: Ignoring "noauto" option for root device
	[  +0.157839] systemd-fstab-generator[3791]: Ignoring "noauto" option for root device
	[  +0.181723] systemd-fstab-generator[3805]: Ignoring "noauto" option for root device
	[  +0.155164] systemd-fstab-generator[3817]: Ignoring "noauto" option for root device
	[  +0.272755] systemd-fstab-generator[3845]: Ignoring "noauto" option for root device
	[  +0.796593] systemd-fstab-generator[3958]: Ignoring "noauto" option for root device
	[May28 20:50] kauditd_printk_skb: 223 callbacks suppressed
	[ +11.598626] kauditd_printk_skb: 1 callbacks suppressed
	[ +39.340094] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [650c6f374c3b3e6697c77d68d23551a1335fae362d137a464df72e2bc23e4e14] <==
	2024/05/28 20:48:25 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-05-28T20:48:25.37315Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"667.06506ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/\" range_end:\"/registry/clusterrolebindings0\" limit:500 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-05-28T20:48:25.373212Z","caller":"traceutil/trace.go:171","msg":"trace[370914826] range","detail":"{range_begin:/registry/clusterrolebindings/; range_end:/registry/clusterrolebindings0; }","duration":"667.160516ms","start":"2024-05-28T20:48:24.706046Z","end":"2024-05-28T20:48:25.373207Z","steps":["trace[370914826] 'agreement among raft nodes before linearized reading'  (duration: 667.088851ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T20:48:25.37329Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-28T20:48:24.706033Z","time spent":"667.211188ms","remote":"127.0.0.1:45608","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":0,"response size":0,"request content":"key:\"/registry/clusterrolebindings/\" range_end:\"/registry/clusterrolebindings0\" limit:500 "}
	2024/05/28 20:48:25 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-05-28T20:48:25.432024Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.100:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-28T20:48:25.43244Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.100:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-28T20:48:25.432546Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"3276445ff8d31e34","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-05-28T20:48:25.432815Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"a38e88b5839dc078"}
	{"level":"info","ts":"2024-05-28T20:48:25.432917Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"a38e88b5839dc078"}
	{"level":"info","ts":"2024-05-28T20:48:25.432974Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"a38e88b5839dc078"}
	{"level":"info","ts":"2024-05-28T20:48:25.433131Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078"}
	{"level":"info","ts":"2024-05-28T20:48:25.433234Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078"}
	{"level":"info","ts":"2024-05-28T20:48:25.43327Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078"}
	{"level":"info","ts":"2024-05-28T20:48:25.4333Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"a38e88b5839dc078"}
	{"level":"info","ts":"2024-05-28T20:48:25.433308Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"94cfe90357540c6b"}
	{"level":"info","ts":"2024-05-28T20:48:25.433316Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"94cfe90357540c6b"}
	{"level":"info","ts":"2024-05-28T20:48:25.43337Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"94cfe90357540c6b"}
	{"level":"info","ts":"2024-05-28T20:48:25.433467Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"3276445ff8d31e34","remote-peer-id":"94cfe90357540c6b"}
	{"level":"info","ts":"2024-05-28T20:48:25.433512Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3276445ff8d31e34","remote-peer-id":"94cfe90357540c6b"}
	{"level":"info","ts":"2024-05-28T20:48:25.43354Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"3276445ff8d31e34","remote-peer-id":"94cfe90357540c6b"}
	{"level":"info","ts":"2024-05-28T20:48:25.43355Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"94cfe90357540c6b"}
	{"level":"info","ts":"2024-05-28T20:48:25.436166Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.100:2380"}
	{"level":"info","ts":"2024-05-28T20:48:25.436301Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.100:2380"}
	{"level":"info","ts":"2024-05-28T20:48:25.436333Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-908878","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.100:2380"],"advertise-client-urls":["https://192.168.39.100:2379"]}
	
	
	==> etcd [7d3a1aa224cb0622f2a2e529a50b304df8ff9e2738e1dc6698099e3a06dba136] <==
	{"level":"warn","ts":"2024-05-28T20:51:21.469444Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"94cfe90357540c6b","rtt":"0s","error":"dial tcp 192.168.39.73:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-28T20:51:25.123502Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.73:2380/version","remote-member-id":"94cfe90357540c6b","error":"Get \"https://192.168.39.73:2380/version\": dial tcp 192.168.39.73:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-28T20:51:25.123573Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"94cfe90357540c6b","error":"Get \"https://192.168.39.73:2380/version\": dial tcp 192.168.39.73:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-28T20:51:26.470421Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"94cfe90357540c6b","rtt":"0s","error":"dial tcp 192.168.39.73:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-28T20:51:26.470452Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"94cfe90357540c6b","rtt":"0s","error":"dial tcp 192.168.39.73:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-28T20:51:29.125991Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.73:2380/version","remote-member-id":"94cfe90357540c6b","error":"Get \"https://192.168.39.73:2380/version\": dial tcp 192.168.39.73:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-28T20:51:29.126171Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"94cfe90357540c6b","error":"Get \"https://192.168.39.73:2380/version\": dial tcp 192.168.39.73:2380: connect: connection refused"}
	{"level":"info","ts":"2024-05-28T20:51:30.030347Z","caller":"traceutil/trace.go:171","msg":"trace[1970610536] linearizableReadLoop","detail":"{readStateIndex:2626; appliedIndex:2626; }","duration":"120.533364ms","start":"2024-05-28T20:51:29.909779Z","end":"2024-05-28T20:51:30.030313Z","steps":["trace[1970610536] 'read index received'  (duration: 120.527177ms)","trace[1970610536] 'applied index is now lower than readState.Index'  (duration: 4.607µs)"],"step_count":2}
	{"level":"warn","ts":"2024-05-28T20:51:30.030592Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.759804ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-ha-908878-m03\" ","response":"range_response_count:1 size:6759"}
	{"level":"info","ts":"2024-05-28T20:51:30.03071Z","caller":"traceutil/trace.go:171","msg":"trace[2018935382] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-ha-908878-m03; range_end:; response_count:1; response_revision:2269; }","duration":"120.939721ms","start":"2024-05-28T20:51:29.909754Z","end":"2024-05-28T20:51:30.030694Z","steps":["trace[2018935382] 'agreement among raft nodes before linearized reading'  (duration: 120.682625ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-28T20:51:30.030801Z","caller":"traceutil/trace.go:171","msg":"trace[2142274164] transaction","detail":"{read_only:false; response_revision:2270; number_of_response:1; }","duration":"151.277132ms","start":"2024-05-28T20:51:29.879512Z","end":"2024-05-28T20:51:30.030789Z","steps":["trace[2142274164] 'process raft request'  (duration: 151.101188ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T20:51:31.471518Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"94cfe90357540c6b","rtt":"0s","error":"dial tcp 192.168.39.73:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-28T20:51:31.471592Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"94cfe90357540c6b","rtt":"0s","error":"dial tcp 192.168.39.73:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-28T20:51:33.128479Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.73:2380/version","remote-member-id":"94cfe90357540c6b","error":"Get \"https://192.168.39.73:2380/version\": dial tcp 192.168.39.73:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-28T20:51:33.128536Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"94cfe90357540c6b","error":"Get \"https://192.168.39.73:2380/version\": dial tcp 192.168.39.73:2380: connect: connection refused"}
	{"level":"info","ts":"2024-05-28T20:51:35.837731Z","caller":"traceutil/trace.go:171","msg":"trace[1064552491] transaction","detail":"{read_only:false; response_revision:2290; number_of_response:1; }","duration":"118.096322ms","start":"2024-05-28T20:51:35.719615Z","end":"2024-05-28T20:51:35.837711Z","steps":["trace[1064552491] 'process raft request'  (duration: 117.999511ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T20:51:36.471587Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"94cfe90357540c6b","rtt":"0s","error":"dial tcp 192.168.39.73:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-28T20:51:36.472089Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"94cfe90357540c6b","rtt":"0s","error":"dial tcp 192.168.39.73:2380: connect: connection refused"}
	{"level":"info","ts":"2024-05-28T20:51:36.561915Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"94cfe90357540c6b"}
	{"level":"info","ts":"2024-05-28T20:51:36.562033Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3276445ff8d31e34","remote-peer-id":"94cfe90357540c6b"}
	{"level":"info","ts":"2024-05-28T20:51:36.562188Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"3276445ff8d31e34","remote-peer-id":"94cfe90357540c6b"}
	{"level":"info","ts":"2024-05-28T20:51:36.594302Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"3276445ff8d31e34","to":"94cfe90357540c6b","stream-type":"stream Message"}
	{"level":"info","ts":"2024-05-28T20:51:36.594368Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"3276445ff8d31e34","remote-peer-id":"94cfe90357540c6b"}
	{"level":"info","ts":"2024-05-28T20:51:36.59472Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"3276445ff8d31e34","to":"94cfe90357540c6b","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-05-28T20:51:36.594745Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"3276445ff8d31e34","remote-peer-id":"94cfe90357540c6b"}
	
	
	==> kernel <==
	 20:52:23 up 13 min,  0 users,  load average: 0.86, 0.59, 0.33
	Linux ha-908878 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [ba9034a620e1e980ba83271c2686d0e2cc1672f83aa4b19c3789a2bcda09a040] <==
	I0528 20:51:49.110512       1 main.go:250] Node ha-908878-m04 has CIDR [10.244.3.0/24] 
	I0528 20:51:59.128968       1 main.go:223] Handling node with IPs: map[192.168.39.100:{}]
	I0528 20:51:59.129031       1 main.go:227] handling current node
	I0528 20:51:59.129053       1 main.go:223] Handling node with IPs: map[192.168.39.239:{}]
	I0528 20:51:59.129058       1 main.go:250] Node ha-908878-m02 has CIDR [10.244.1.0/24] 
	I0528 20:51:59.129296       1 main.go:223] Handling node with IPs: map[192.168.39.73:{}]
	I0528 20:51:59.129304       1 main.go:250] Node ha-908878-m03 has CIDR [10.244.2.0/24] 
	I0528 20:51:59.130015       1 main.go:223] Handling node with IPs: map[192.168.39.38:{}]
	I0528 20:51:59.130057       1 main.go:250] Node ha-908878-m04 has CIDR [10.244.3.0/24] 
	I0528 20:52:09.155667       1 main.go:223] Handling node with IPs: map[192.168.39.100:{}]
	I0528 20:52:09.155732       1 main.go:227] handling current node
	I0528 20:52:09.155761       1 main.go:223] Handling node with IPs: map[192.168.39.239:{}]
	I0528 20:52:09.155766       1 main.go:250] Node ha-908878-m02 has CIDR [10.244.1.0/24] 
	I0528 20:52:09.155966       1 main.go:223] Handling node with IPs: map[192.168.39.73:{}]
	I0528 20:52:09.155994       1 main.go:250] Node ha-908878-m03 has CIDR [10.244.2.0/24] 
	I0528 20:52:09.156060       1 main.go:223] Handling node with IPs: map[192.168.39.38:{}]
	I0528 20:52:09.156081       1 main.go:250] Node ha-908878-m04 has CIDR [10.244.3.0/24] 
	I0528 20:52:19.174132       1 main.go:223] Handling node with IPs: map[192.168.39.100:{}]
	I0528 20:52:19.174231       1 main.go:227] handling current node
	I0528 20:52:19.174277       1 main.go:223] Handling node with IPs: map[192.168.39.239:{}]
	I0528 20:52:19.174295       1 main.go:250] Node ha-908878-m02 has CIDR [10.244.1.0/24] 
	I0528 20:52:19.174419       1 main.go:223] Handling node with IPs: map[192.168.39.73:{}]
	I0528 20:52:19.174440       1 main.go:250] Node ha-908878-m03 has CIDR [10.244.2.0/24] 
	I0528 20:52:19.174574       1 main.go:223] Handling node with IPs: map[192.168.39.38:{}]
	I0528 20:52:19.174613       1 main.go:250] Node ha-908878-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [bbe5284b67f85e03d1cd00b308ec28baa2dd3f031d560ef8bcee3e352afbd4f0] <==
	I0528 20:50:01.030652       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0528 20:50:18.658985       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0528 20:50:21.729380       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0528 20:50:27.875149       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0528 20:50:30.945338       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0528 20:50:33.947984       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xe3b
	
	
	==> kube-apiserver [1eea72764c6cef58a30efce77aa95fcea5c0414983c225f7e96112a5a8c65c5a] <==
	I0528 20:50:00.730975       1 options.go:221] external host was not specified, using 192.168.39.100
	I0528 20:50:00.732047       1 server.go:148] Version: v1.30.1
	I0528 20:50:00.732119       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 20:50:01.884835       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0528 20:50:01.894243       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0528 20:50:01.897938       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0528 20:50:01.897973       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0528 20:50:01.898154       1 instance.go:299] Using reconciler: lease
	W0528 20:50:21.883703       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0528 20:50:21.883820       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0528 20:50:21.898845       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0528 20:50:21.898859       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [5457554337f0db189ea45788d726f4b927d69bfc466967041382543f02be8b80] <==
	I0528 20:50:48.111041       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0528 20:50:48.111147       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0528 20:50:48.182270       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0528 20:50:48.195819       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0528 20:50:48.195927       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0528 20:50:48.195957       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0528 20:50:48.195929       1 policy_source.go:224] refreshing policies
	I0528 20:50:48.196420       1 shared_informer.go:320] Caches are synced for configmaps
	I0528 20:50:48.198598       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0528 20:50:48.200454       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0528 20:50:48.200644       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0528 20:50:48.207918       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0528 20:50:48.211133       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0528 20:50:48.211226       1 aggregator.go:165] initial CRD sync complete...
	I0528 20:50:48.211289       1 autoregister_controller.go:141] Starting autoregister controller
	I0528 20:50:48.211330       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0528 20:50:48.211355       1 cache.go:39] Caches are synced for autoregister controller
	I0528 20:50:48.287070       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	W0528 20:50:48.303666       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.239 192.168.39.73]
	I0528 20:50:48.305017       1 controller.go:615] quota admission added evaluator for: endpoints
	I0528 20:50:48.321855       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0528 20:50:48.331594       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0528 20:50:49.108022       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0528 20:50:49.549403       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.100 192.168.39.239 192.168.39.73]
	W0528 20:50:59.550731       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.100 192.168.39.239]
	
	
	==> kube-controller-manager [512ee36cfc30be3bbea5a2852da22e2052a8bc2f3c461e7e0f1c5bdab2356ceb] <==
	I0528 20:50:01.470257       1 serving.go:380] Generated self-signed cert in-memory
	I0528 20:50:02.164786       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0528 20:50:02.164832       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 20:50:02.166698       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0528 20:50:02.166847       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0528 20:50:02.167131       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0528 20:50:02.167273       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0528 20:50:22.906158       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.100:8443/healthz\": dial tcp 192.168.39.100:8443: connect: connection refused"
	
	
	==> kube-controller-manager [f796b4c1fcb382b678803794da8228990266f95adb1e99f62cb07eb6a2dc2b0e] <==
	I0528 20:51:00.496750       1 shared_informer.go:320] Caches are synced for TTL
	I0528 20:51:00.499100       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0528 20:51:00.504595       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0528 20:51:00.509219       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0528 20:51:00.509233       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0528 20:51:00.509244       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0528 20:51:00.510561       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0528 20:51:00.576570       1 shared_informer.go:320] Caches are synced for stateful set
	I0528 20:51:00.632191       1 shared_informer.go:320] Caches are synced for resource quota
	I0528 20:51:00.639855       1 shared_informer.go:320] Caches are synced for disruption
	I0528 20:51:00.677293       1 shared_informer.go:320] Caches are synced for resource quota
	I0528 20:51:01.134442       1 shared_informer.go:320] Caches are synced for garbage collector
	I0528 20:51:01.135669       1 shared_informer.go:320] Caches are synced for garbage collector
	I0528 20:51:01.135720       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0528 20:51:10.566163       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.604364ms"
	I0528 20:51:10.566309       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.735µs"
	I0528 20:51:16.326504       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="53.492799ms"
	I0528 20:51:16.328198       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="1.511035ms"
	I0528 20:51:16.330446       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-vvjpf EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-vvjpf\": the object has been modified; please apply your changes to the latest version and try again"
	I0528 20:51:16.330770       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"fa07a18a-5fb7-4a15-9ff8-6729e550f12c", APIVersion:"v1", ResourceVersion:"247", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-vvjpf EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-vvjpf": the object has been modified; please apply your changes to the latest version and try again
	I0528 20:51:27.392189       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="75.744045ms"
	I0528 20:51:27.392329       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.189µs"
	I0528 20:51:43.797501       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.056637ms"
	I0528 20:51:43.798855       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.6µs"
	I0528 20:52:14.735074       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-908878-m04"
	
	
	==> kube-proxy [278ab03af8f2313b514521259d04b44733ed754d0d537ac169ab50c94bc2944c] <==
	I0528 20:50:02.120613       1 server_linux.go:69] "Using iptables proxy"
	E0528 20:50:04.962702       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-908878\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0528 20:50:08.033299       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-908878\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0528 20:50:11.105324       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-908878\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0528 20:50:17.249326       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-908878\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0528 20:50:29.537227       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-908878\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0528 20:50:48.510627       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.100"]
	I0528 20:50:48.549222       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0528 20:50:48.549285       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0528 20:50:48.549303       1 server_linux.go:165] "Using iptables Proxier"
	I0528 20:50:48.552031       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0528 20:50:48.552207       1 server.go:872] "Version info" version="v1.30.1"
	I0528 20:50:48.552239       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 20:50:48.554139       1 config.go:192] "Starting service config controller"
	I0528 20:50:48.554172       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0528 20:50:48.554193       1 config.go:101] "Starting endpoint slice config controller"
	I0528 20:50:48.554197       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0528 20:50:48.554579       1 config.go:319] "Starting node config controller"
	I0528 20:50:48.554609       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0528 20:50:48.654934       1 shared_informer.go:320] Caches are synced for node config
	I0528 20:50:48.654981       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0528 20:50:48.654944       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [97ba5f2725852aa19d200b1005c8bc9678a21a2b0a0b768cbf10f1dde692ecbe] <==
	E0528 20:47:12.929634       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1847": dial tcp 192.168.39.254:8443: connect: no route to host
	W0528 20:47:16.001367       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1847": dial tcp 192.168.39.254:8443: connect: no route to host
	E0528 20:47:16.001467       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1847": dial tcp 192.168.39.254:8443: connect: no route to host
	W0528 20:47:16.001422       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-908878&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	E0528 20:47:16.001555       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-908878&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	W0528 20:47:16.001503       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1837": dial tcp 192.168.39.254:8443: connect: no route to host
	E0528 20:47:16.001629       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1837": dial tcp 192.168.39.254:8443: connect: no route to host
	W0528 20:47:22.146230       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1837": dial tcp 192.168.39.254:8443: connect: no route to host
	E0528 20:47:22.146708       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1837": dial tcp 192.168.39.254:8443: connect: no route to host
	W0528 20:47:22.146855       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1847": dial tcp 192.168.39.254:8443: connect: no route to host
	E0528 20:47:22.146956       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1847": dial tcp 192.168.39.254:8443: connect: no route to host
	W0528 20:47:22.146980       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-908878&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	E0528 20:47:22.147075       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-908878&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	W0528 20:47:31.361841       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-908878&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	E0528 20:47:31.362504       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-908878&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	W0528 20:47:34.434132       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1837": dial tcp 192.168.39.254:8443: connect: no route to host
	E0528 20:47:34.434215       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1837": dial tcp 192.168.39.254:8443: connect: no route to host
	W0528 20:47:34.434278       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1847": dial tcp 192.168.39.254:8443: connect: no route to host
	E0528 20:47:34.434322       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1847": dial tcp 192.168.39.254:8443: connect: no route to host
	W0528 20:47:55.939092       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1847": dial tcp 192.168.39.254:8443: connect: no route to host
	E0528 20:47:55.939445       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1847": dial tcp 192.168.39.254:8443: connect: no route to host
	W0528 20:47:59.009578       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-908878&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	E0528 20:47:59.009782       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-908878&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	W0528 20:48:02.084397       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1837": dial tcp 192.168.39.254:8443: connect: no route to host
	E0528 20:48:02.084635       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1837": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [05d5882852e6eb482b9e3ba850d60958305fa84234fd741f3345967a57b1c1f9] <==
	W0528 20:48:22.870491       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0528 20:48:22.870524       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0528 20:48:23.121957       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0528 20:48:23.122005       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0528 20:48:23.311635       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0528 20:48:23.311711       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0528 20:48:23.925367       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0528 20:48:23.925435       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0528 20:48:23.958612       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0528 20:48:23.958655       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0528 20:48:24.050096       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0528 20:48:24.050145       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0528 20:48:24.080678       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0528 20:48:24.080731       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0528 20:48:24.536591       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0528 20:48:24.536620       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0528 20:48:24.897653       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0528 20:48:24.897756       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0528 20:48:24.921809       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0528 20:48:24.921966       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0528 20:48:25.083149       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0528 20:48:25.083239       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0528 20:48:25.237236       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0528 20:48:25.237327       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0528 20:48:25.341479       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [7611fb5205e4379847769c096d7bee8dd1dcbf6921b3cd3e9e213bd651f0650a] <==
	W0528 20:50:38.824995       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.100:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	E0528 20:50:38.825156       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.100:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	W0528 20:50:39.186584       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.100:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	E0528 20:50:39.186714       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.100:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	W0528 20:50:39.924304       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.100:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	E0528 20:50:39.924409       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.100:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	W0528 20:50:40.634832       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.100:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	E0528 20:50:40.635023       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.100:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	W0528 20:50:40.850008       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.100:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	E0528 20:50:40.850111       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.100:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	W0528 20:50:41.661066       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.100:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	E0528 20:50:41.661144       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.100:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	W0528 20:50:42.204726       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.100:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	E0528 20:50:42.204784       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.100:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	W0528 20:50:42.482457       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.100:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	E0528 20:50:42.482515       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.100:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	W0528 20:50:43.353411       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.100:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	E0528 20:50:43.353472       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.100:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	W0528 20:50:43.438638       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.100:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	E0528 20:50:43.438814       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.100:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	W0528 20:50:43.641828       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.100:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	E0528 20:50:43.642145       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.100:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	W0528 20:50:43.810629       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.100:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	E0528 20:50:43.810707       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.100:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	I0528 20:51:03.611991       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 28 20:50:49 ha-908878 kubelet[1380]: I0528 20:50:49.726424    1380 scope.go:117] "RemoveContainer" containerID="4de963c4394e867a25c17c67c272c43fcf46165892ff733312639139f72f5cc8"
	May 28 20:50:49 ha-908878 kubelet[1380]: E0528 20:50:49.726681    1380 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(d79872e2-b267-446a-99dc-5bf9f398d31c)\"" pod="kube-system/storage-provisioner" podUID="d79872e2-b267-446a-99dc-5bf9f398d31c"
	May 28 20:50:53 ha-908878 kubelet[1380]: I0528 20:50:53.160538    1380 scope.go:117] "RemoveContainer" containerID="bbe5284b67f85e03d1cd00b308ec28baa2dd3f031d560ef8bcee3e352afbd4f0"
	May 28 20:50:53 ha-908878 kubelet[1380]: E0528 20:50:53.162077    1380 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kindnet-cni pod=kindnet-x4mzh_kube-system(8069a7ea-0ab1-4064-b982-867dbdfd97aa)\"" pod="kube-system/kindnet-x4mzh" podUID="8069a7ea-0ab1-4064-b982-867dbdfd97aa"
	May 28 20:51:04 ha-908878 kubelet[1380]: I0528 20:51:04.160553    1380 scope.go:117] "RemoveContainer" containerID="4de963c4394e867a25c17c67c272c43fcf46165892ff733312639139f72f5cc8"
	May 28 20:51:04 ha-908878 kubelet[1380]: E0528 20:51:04.160842    1380 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(d79872e2-b267-446a-99dc-5bf9f398d31c)\"" pod="kube-system/storage-provisioner" podUID="d79872e2-b267-446a-99dc-5bf9f398d31c"
	May 28 20:51:07 ha-908878 kubelet[1380]: E0528 20:51:07.196171    1380 iptables.go:577] "Could not set up iptables canary" err=<
	May 28 20:51:07 ha-908878 kubelet[1380]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 20:51:07 ha-908878 kubelet[1380]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 20:51:07 ha-908878 kubelet[1380]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 20:51:07 ha-908878 kubelet[1380]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 28 20:51:08 ha-908878 kubelet[1380]: I0528 20:51:08.160775    1380 scope.go:117] "RemoveContainer" containerID="bbe5284b67f85e03d1cd00b308ec28baa2dd3f031d560ef8bcee3e352afbd4f0"
	May 28 20:51:16 ha-908878 kubelet[1380]: I0528 20:51:16.033097    1380 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-ljbzs" podStartSLOduration=564.317045393 podStartE2EDuration="9m27.033040567s" podCreationTimestamp="2024-05-28 20:41:49 +0000 UTC" firstStartedPulling="2024-05-28 20:41:50.183077312 +0000 UTC m=+163.160475270" lastFinishedPulling="2024-05-28 20:41:52.899072498 +0000 UTC m=+165.876470444" observedRunningTime="2024-05-28 20:41:53.90458958 +0000 UTC m=+166.881987543" watchObservedRunningTime="2024-05-28 20:51:16.033040567 +0000 UTC m=+729.010438528"
	May 28 20:51:17 ha-908878 kubelet[1380]: I0528 20:51:17.161859    1380 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-908878" podUID="45e2e6e2-6ea1-4498-aca1-9c3060eb9ca4"
	May 28 20:51:17 ha-908878 kubelet[1380]: I0528 20:51:17.188455    1380 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-908878"
	May 28 20:51:17 ha-908878 kubelet[1380]: I0528 20:51:17.930484    1380 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-908878" podUID="45e2e6e2-6ea1-4498-aca1-9c3060eb9ca4"
	May 28 20:51:19 ha-908878 kubelet[1380]: I0528 20:51:19.160270    1380 scope.go:117] "RemoveContainer" containerID="4de963c4394e867a25c17c67c272c43fcf46165892ff733312639139f72f5cc8"
	May 28 20:51:19 ha-908878 kubelet[1380]: E0528 20:51:19.160749    1380 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(d79872e2-b267-446a-99dc-5bf9f398d31c)\"" pod="kube-system/storage-provisioner" podUID="d79872e2-b267-446a-99dc-5bf9f398d31c"
	May 28 20:51:27 ha-908878 kubelet[1380]: I0528 20:51:27.178261    1380 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-908878" podStartSLOduration=10.17823493 podStartE2EDuration="10.17823493s" podCreationTimestamp="2024-05-28 20:51:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-28 20:51:27.177502805 +0000 UTC m=+740.154900771" watchObservedRunningTime="2024-05-28 20:51:27.17823493 +0000 UTC m=+740.155632895"
	May 28 20:51:32 ha-908878 kubelet[1380]: I0528 20:51:32.160725    1380 scope.go:117] "RemoveContainer" containerID="4de963c4394e867a25c17c67c272c43fcf46165892ff733312639139f72f5cc8"
	May 28 20:52:07 ha-908878 kubelet[1380]: E0528 20:52:07.196788    1380 iptables.go:577] "Could not set up iptables canary" err=<
	May 28 20:52:07 ha-908878 kubelet[1380]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 20:52:07 ha-908878 kubelet[1380]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 20:52:07 ha-908878 kubelet[1380]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 20:52:07 ha-908878 kubelet[1380]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0528 20:52:22.174137   30215 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18966-3963/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-908878 -n ha-908878
helpers_test.go:261: (dbg) Run:  kubectl --context ha-908878 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (362.24s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (141.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 stop -v=7 --alsologtostderr
E0528 20:54:42.598156   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-908878 stop -v=7 --alsologtostderr: exit status 82 (2m0.459015668s)

                                                
                                                
-- stdout --
	* Stopping node "ha-908878-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0528 20:52:42.766449   30622 out.go:291] Setting OutFile to fd 1 ...
	I0528 20:52:42.766561   30622 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:52:42.766572   30622 out.go:304] Setting ErrFile to fd 2...
	I0528 20:52:42.766578   30622 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:52:42.766750   30622 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
	I0528 20:52:42.766967   30622 out.go:298] Setting JSON to false
	I0528 20:52:42.767073   30622 mustload.go:65] Loading cluster: ha-908878
	I0528 20:52:42.767431   30622 config.go:182] Loaded profile config "ha-908878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 20:52:42.767521   30622 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/config.json ...
	I0528 20:52:42.767703   30622 mustload.go:65] Loading cluster: ha-908878
	I0528 20:52:42.767843   30622 config.go:182] Loaded profile config "ha-908878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 20:52:42.767871   30622 stop.go:39] StopHost: ha-908878-m04
	I0528 20:52:42.768259   30622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:52:42.768311   30622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:52:42.783368   30622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46855
	I0528 20:52:42.783828   30622 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:52:42.784441   30622 main.go:141] libmachine: Using API Version  1
	I0528 20:52:42.784462   30622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:52:42.784769   30622 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:52:42.787115   30622 out.go:177] * Stopping node "ha-908878-m04"  ...
	I0528 20:52:42.788399   30622 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0528 20:52:42.788441   30622 main.go:141] libmachine: (ha-908878-m04) Calling .DriverName
	I0528 20:52:42.788642   30622 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0528 20:52:42.788662   30622 main.go:141] libmachine: (ha-908878-m04) Calling .GetSSHHostname
	I0528 20:52:42.791386   30622 main.go:141] libmachine: (ha-908878-m04) DBG | domain ha-908878-m04 has defined MAC address 52:54:00:dd:1f:24 in network mk-ha-908878
	I0528 20:52:42.791769   30622 main.go:141] libmachine: (ha-908878-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:1f:24", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:52:08 +0000 UTC Type:0 Mac:52:54:00:dd:1f:24 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-908878-m04 Clientid:01:52:54:00:dd:1f:24}
	I0528 20:52:42.791803   30622 main.go:141] libmachine: (ha-908878-m04) DBG | domain ha-908878-m04 has defined IP address 192.168.39.38 and MAC address 52:54:00:dd:1f:24 in network mk-ha-908878
	I0528 20:52:42.792008   30622 main.go:141] libmachine: (ha-908878-m04) Calling .GetSSHPort
	I0528 20:52:42.792166   30622 main.go:141] libmachine: (ha-908878-m04) Calling .GetSSHKeyPath
	I0528 20:52:42.792330   30622 main.go:141] libmachine: (ha-908878-m04) Calling .GetSSHUsername
	I0528 20:52:42.792461   30622 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m04/id_rsa Username:docker}
	I0528 20:52:42.877069   30622 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0528 20:52:42.929846   30622 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0528 20:52:42.981896   30622 main.go:141] libmachine: Stopping "ha-908878-m04"...
	I0528 20:52:42.981918   30622 main.go:141] libmachine: (ha-908878-m04) Calling .GetState
	I0528 20:52:42.983481   30622 main.go:141] libmachine: (ha-908878-m04) Calling .Stop
	I0528 20:52:42.986889   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 0/120
	I0528 20:52:43.988109   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 1/120
	I0528 20:52:44.989472   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 2/120
	I0528 20:52:45.991076   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 3/120
	I0528 20:52:46.992378   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 4/120
	I0528 20:52:47.994091   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 5/120
	I0528 20:52:48.995390   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 6/120
	I0528 20:52:49.996624   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 7/120
	I0528 20:52:50.997870   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 8/120
	I0528 20:52:51.999344   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 9/120
	I0528 20:52:53.001109   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 10/120
	I0528 20:52:54.002714   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 11/120
	I0528 20:52:55.004177   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 12/120
	I0528 20:52:56.005518   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 13/120
	I0528 20:52:57.006796   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 14/120
	I0528 20:52:58.008609   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 15/120
	I0528 20:52:59.010026   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 16/120
	I0528 20:53:00.012450   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 17/120
	I0528 20:53:01.013863   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 18/120
	I0528 20:53:02.015033   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 19/120
	I0528 20:53:03.017236   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 20/120
	I0528 20:53:04.018712   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 21/120
	I0528 20:53:05.020492   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 22/120
	I0528 20:53:06.021723   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 23/120
	I0528 20:53:07.024050   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 24/120
	I0528 20:53:08.025928   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 25/120
	I0528 20:53:09.028402   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 26/120
	I0528 20:53:10.029828   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 27/120
	I0528 20:53:11.031096   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 28/120
	I0528 20:53:12.033143   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 29/120
	I0528 20:53:13.035277   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 30/120
	I0528 20:53:14.036613   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 31/120
	I0528 20:53:15.038187   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 32/120
	I0528 20:53:16.039449   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 33/120
	I0528 20:53:17.041011   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 34/120
	I0528 20:53:18.042966   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 35/120
	I0528 20:53:19.044996   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 36/120
	I0528 20:53:20.046613   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 37/120
	I0528 20:53:21.048105   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 38/120
	I0528 20:53:22.049521   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 39/120
	I0528 20:53:23.051504   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 40/120
	I0528 20:53:24.053026   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 41/120
	I0528 20:53:25.054427   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 42/120
	I0528 20:53:26.056279   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 43/120
	I0528 20:53:27.057862   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 44/120
	I0528 20:53:28.059795   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 45/120
	I0528 20:53:29.061900   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 46/120
	I0528 20:53:30.064106   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 47/120
	I0528 20:53:31.065432   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 48/120
	I0528 20:53:32.066876   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 49/120
	I0528 20:53:33.068785   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 50/120
	I0528 20:53:34.070140   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 51/120
	I0528 20:53:35.072284   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 52/120
	I0528 20:53:36.073547   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 53/120
	I0528 20:53:37.075318   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 54/120
	I0528 20:53:38.077162   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 55/120
	I0528 20:53:39.078491   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 56/120
	I0528 20:53:40.079880   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 57/120
	I0528 20:53:41.081150   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 58/120
	I0528 20:53:42.082642   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 59/120
	I0528 20:53:43.084432   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 60/120
	I0528 20:53:44.085855   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 61/120
	I0528 20:53:45.087433   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 62/120
	I0528 20:53:46.088764   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 63/120
	I0528 20:53:47.090306   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 64/120
	I0528 20:53:48.092141   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 65/120
	I0528 20:53:49.093598   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 66/120
	I0528 20:53:50.094884   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 67/120
	I0528 20:53:51.096842   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 68/120
	I0528 20:53:52.098397   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 69/120
	I0528 20:53:53.100547   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 70/120
	I0528 20:53:54.101892   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 71/120
	I0528 20:53:55.104265   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 72/120
	I0528 20:53:56.105557   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 73/120
	I0528 20:53:57.107164   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 74/120
	I0528 20:53:58.109227   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 75/120
	I0528 20:53:59.110844   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 76/120
	I0528 20:54:00.112185   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 77/120
	I0528 20:54:01.114168   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 78/120
	I0528 20:54:02.115521   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 79/120
	I0528 20:54:03.117264   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 80/120
	I0528 20:54:04.119101   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 81/120
	I0528 20:54:05.120473   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 82/120
	I0528 20:54:06.122132   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 83/120
	I0528 20:54:07.123542   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 84/120
	I0528 20:54:08.125359   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 85/120
	I0528 20:54:09.126776   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 86/120
	I0528 20:54:10.128260   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 87/120
	I0528 20:54:11.129438   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 88/120
	I0528 20:54:12.131218   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 89/120
	I0528 20:54:13.133309   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 90/120
	I0528 20:54:14.134571   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 91/120
	I0528 20:54:15.135966   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 92/120
	I0528 20:54:16.137275   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 93/120
	I0528 20:54:17.138575   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 94/120
	I0528 20:54:18.140514   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 95/120
	I0528 20:54:19.141929   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 96/120
	I0528 20:54:20.143271   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 97/120
	I0528 20:54:21.144546   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 98/120
	I0528 20:54:22.146232   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 99/120
	I0528 20:54:23.148320   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 100/120
	I0528 20:54:24.150595   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 101/120
	I0528 20:54:25.151886   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 102/120
	I0528 20:54:26.154179   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 103/120
	I0528 20:54:27.155381   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 104/120
	I0528 20:54:28.157204   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 105/120
	I0528 20:54:29.158647   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 106/120
	I0528 20:54:30.159905   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 107/120
	I0528 20:54:31.161187   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 108/120
	I0528 20:54:32.162532   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 109/120
	I0528 20:54:33.164233   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 110/120
	I0528 20:54:34.165705   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 111/120
	I0528 20:54:35.166871   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 112/120
	I0528 20:54:36.168299   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 113/120
	I0528 20:54:37.170105   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 114/120
	I0528 20:54:38.171923   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 115/120
	I0528 20:54:39.173266   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 116/120
	I0528 20:54:40.174773   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 117/120
	I0528 20:54:41.176116   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 118/120
	I0528 20:54:42.177542   30622 main.go:141] libmachine: (ha-908878-m04) Waiting for machine to stop 119/120
	I0528 20:54:43.178470   30622 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0528 20:54:43.178528   30622 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0528 20:54:43.180224   30622 out.go:177] 
	W0528 20:54:43.181525   30622 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0528 20:54:43.181537   30622 out.go:239] * 
	W0528 20:54:43.183982   30622 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0528 20:54:43.185162   30622 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-908878 stop -v=7 --alsologtostderr": exit status 82
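[Editor's note] The 120 one-second "Waiting for machine to stop N/120" entries above show a poll-with-deadline pattern: issue the stop request, then re-check the VM state once per second until it leaves "Running" or the retry budget runs out. The Go sketch below is only an illustration of that shape under assumed names (`getState`, `stopVM` are hypothetical stand-ins, not minikube's real driver API).

package main

import (
	"errors"
	"fmt"
	"time"
)

// getState and stopVM are hypothetical stand-ins for a VM driver's
// state query and stop request; they are not minikube's actual API.
func getState() string { return "Running" }
func stopVM() error    { return nil }

// stopWithTimeout issues a stop request and then polls the VM state
// once per second, giving up after maxRetries attempts - the same
// shape as the "Waiting for machine to stop N/120" loop in the log.
func stopWithTimeout(maxRetries int) error {
	if err := stopVM(); err != nil {
		return fmt.Errorf("stop request failed: %w", err)
	}
	for i := 0; i < maxRetries; i++ {
		if getState() != "Running" {
			return nil // VM reached a stopped state
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxRetries)
		time.Sleep(1 * time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	if err := stopWithTimeout(120); err != nil {
		fmt.Println("stop err:", err)
	}
}

In the failing run above the state never left "Running", so the loop exhausted its budget and the command exited with status 82 (GUEST_STOP_TIMEOUT).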
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-908878 status -v=7 --alsologtostderr: exit status 3 (18.951648685s)

                                                
                                                
-- stdout --
	ha-908878
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-908878-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-908878-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0528 20:54:43.229279   31069 out.go:291] Setting OutFile to fd 1 ...
	I0528 20:54:43.229373   31069 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:54:43.229384   31069 out.go:304] Setting ErrFile to fd 2...
	I0528 20:54:43.229388   31069 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:54:43.229571   31069 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
	I0528 20:54:43.229725   31069 out.go:298] Setting JSON to false
	I0528 20:54:43.229751   31069 mustload.go:65] Loading cluster: ha-908878
	I0528 20:54:43.229842   31069 notify.go:220] Checking for updates...
	I0528 20:54:43.230479   31069 config.go:182] Loaded profile config "ha-908878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 20:54:43.230546   31069 status.go:255] checking status of ha-908878 ...
	I0528 20:54:43.231603   31069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:54:43.231658   31069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:54:43.259225   31069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45003
	I0528 20:54:43.259677   31069 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:54:43.260248   31069 main.go:141] libmachine: Using API Version  1
	I0528 20:54:43.260270   31069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:54:43.260635   31069 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:54:43.260815   31069 main.go:141] libmachine: (ha-908878) Calling .GetState
	I0528 20:54:43.262563   31069 status.go:330] ha-908878 host status = "Running" (err=<nil>)
	I0528 20:54:43.262580   31069 host.go:66] Checking if "ha-908878" exists ...
	I0528 20:54:43.262858   31069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:54:43.262894   31069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:54:43.276980   31069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33541
	I0528 20:54:43.277398   31069 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:54:43.277819   31069 main.go:141] libmachine: Using API Version  1
	I0528 20:54:43.277841   31069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:54:43.278196   31069 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:54:43.278394   31069 main.go:141] libmachine: (ha-908878) Calling .GetIP
	I0528 20:54:43.280984   31069 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:54:43.281374   31069 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:54:43.281405   31069 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:54:43.281462   31069 host.go:66] Checking if "ha-908878" exists ...
	I0528 20:54:43.281731   31069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:54:43.281780   31069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:54:43.295673   31069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40807
	I0528 20:54:43.296073   31069 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:54:43.296495   31069 main.go:141] libmachine: Using API Version  1
	I0528 20:54:43.296514   31069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:54:43.296769   31069 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:54:43.296908   31069 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:54:43.297094   31069 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 20:54:43.297117   31069 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:54:43.299655   31069 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:54:43.300070   31069 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:54:43.300102   31069 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:54:43.300185   31069 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:54:43.300346   31069 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:54:43.300460   31069 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:54:43.300620   31069 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/id_rsa Username:docker}
	I0528 20:54:43.387856   31069 ssh_runner.go:195] Run: systemctl --version
	I0528 20:54:43.398008   31069 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 20:54:43.415770   31069 kubeconfig.go:125] found "ha-908878" server: "https://192.168.39.254:8443"
	I0528 20:54:43.415795   31069 api_server.go:166] Checking apiserver status ...
	I0528 20:54:43.415830   31069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 20:54:43.437622   31069 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5153/cgroup
	W0528 20:54:43.448844   31069 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5153/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0528 20:54:43.448906   31069 ssh_runner.go:195] Run: ls
	I0528 20:54:43.453428   31069 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0528 20:54:43.457568   31069 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0528 20:54:43.457584   31069 status.go:422] ha-908878 apiserver status = Running (err=<nil>)
	I0528 20:54:43.457594   31069 status.go:257] ha-908878 status: &{Name:ha-908878 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0528 20:54:43.457621   31069 status.go:255] checking status of ha-908878-m02 ...
	I0528 20:54:43.458033   31069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:54:43.458078   31069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:54:43.473225   31069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38705
	I0528 20:54:43.473575   31069 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:54:43.474010   31069 main.go:141] libmachine: Using API Version  1
	I0528 20:54:43.474030   31069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:54:43.474319   31069 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:54:43.474496   31069 main.go:141] libmachine: (ha-908878-m02) Calling .GetState
	I0528 20:54:43.476049   31069 status.go:330] ha-908878-m02 host status = "Running" (err=<nil>)
	I0528 20:54:43.476065   31069 host.go:66] Checking if "ha-908878-m02" exists ...
	I0528 20:54:43.476335   31069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:54:43.476387   31069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:54:43.490061   31069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44393
	I0528 20:54:43.490420   31069 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:54:43.490854   31069 main.go:141] libmachine: Using API Version  1
	I0528 20:54:43.490874   31069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:54:43.491224   31069 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:54:43.491421   31069 main.go:141] libmachine: (ha-908878-m02) Calling .GetIP
	I0528 20:54:43.493967   31069 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:54:43.494400   31069 main.go:141] libmachine: (ha-908878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:bd:28", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:50:10 +0000 UTC Type:0 Mac:52:54:00:b4:bd:28 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:ha-908878-m02 Clientid:01:52:54:00:b4:bd:28}
	I0528 20:54:43.494427   31069 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:54:43.494535   31069 host.go:66] Checking if "ha-908878-m02" exists ...
	I0528 20:54:43.494907   31069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:54:43.494946   31069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:54:43.508788   31069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41519
	I0528 20:54:43.509187   31069 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:54:43.509596   31069 main.go:141] libmachine: Using API Version  1
	I0528 20:54:43.509614   31069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:54:43.509891   31069 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:54:43.510064   31069 main.go:141] libmachine: (ha-908878-m02) Calling .DriverName
	I0528 20:54:43.510210   31069 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 20:54:43.510228   31069 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHHostname
	I0528 20:54:43.512874   31069 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:54:43.513290   31069 main.go:141] libmachine: (ha-908878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:bd:28", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:50:10 +0000 UTC Type:0 Mac:52:54:00:b4:bd:28 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:ha-908878-m02 Clientid:01:52:54:00:b4:bd:28}
	I0528 20:54:43.513329   31069 main.go:141] libmachine: (ha-908878-m02) DBG | domain ha-908878-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:b4:bd:28 in network mk-ha-908878
	I0528 20:54:43.513472   31069 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHPort
	I0528 20:54:43.513704   31069 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHKeyPath
	I0528 20:54:43.513886   31069 main.go:141] libmachine: (ha-908878-m02) Calling .GetSSHUsername
	I0528 20:54:43.514034   31069 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m02/id_rsa Username:docker}
	I0528 20:54:43.598815   31069 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 20:54:43.615052   31069 kubeconfig.go:125] found "ha-908878" server: "https://192.168.39.254:8443"
	I0528 20:54:43.615077   31069 api_server.go:166] Checking apiserver status ...
	I0528 20:54:43.615111   31069 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 20:54:43.629222   31069 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1481/cgroup
	W0528 20:54:43.637943   31069 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1481/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0528 20:54:43.637997   31069 ssh_runner.go:195] Run: ls
	I0528 20:54:43.642049   31069 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0528 20:54:43.646308   31069 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0528 20:54:43.646325   31069 status.go:422] ha-908878-m02 apiserver status = Running (err=<nil>)
	I0528 20:54:43.646332   31069 status.go:257] ha-908878-m02 status: &{Name:ha-908878-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0528 20:54:43.646344   31069 status.go:255] checking status of ha-908878-m04 ...
	I0528 20:54:43.646614   31069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:54:43.646642   31069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:54:43.661600   31069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36827
	I0528 20:54:43.662034   31069 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:54:43.662504   31069 main.go:141] libmachine: Using API Version  1
	I0528 20:54:43.662526   31069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:54:43.662848   31069 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:54:43.663037   31069 main.go:141] libmachine: (ha-908878-m04) Calling .GetState
	I0528 20:54:43.664535   31069 status.go:330] ha-908878-m04 host status = "Running" (err=<nil>)
	I0528 20:54:43.664549   31069 host.go:66] Checking if "ha-908878-m04" exists ...
	I0528 20:54:43.664809   31069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:54:43.664853   31069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:54:43.679438   31069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36653
	I0528 20:54:43.679764   31069 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:54:43.680190   31069 main.go:141] libmachine: Using API Version  1
	I0528 20:54:43.680206   31069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:54:43.680507   31069 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:54:43.680695   31069 main.go:141] libmachine: (ha-908878-m04) Calling .GetIP
	I0528 20:54:43.683252   31069 main.go:141] libmachine: (ha-908878-m04) DBG | domain ha-908878-m04 has defined MAC address 52:54:00:dd:1f:24 in network mk-ha-908878
	I0528 20:54:43.683677   31069 main.go:141] libmachine: (ha-908878-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:1f:24", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:52:08 +0000 UTC Type:0 Mac:52:54:00:dd:1f:24 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-908878-m04 Clientid:01:52:54:00:dd:1f:24}
	I0528 20:54:43.683702   31069 main.go:141] libmachine: (ha-908878-m04) DBG | domain ha-908878-m04 has defined IP address 192.168.39.38 and MAC address 52:54:00:dd:1f:24 in network mk-ha-908878
	I0528 20:54:43.683828   31069 host.go:66] Checking if "ha-908878-m04" exists ...
	I0528 20:54:43.684226   31069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:54:43.684263   31069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:54:43.697733   31069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38283
	I0528 20:54:43.698074   31069 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:54:43.698462   31069 main.go:141] libmachine: Using API Version  1
	I0528 20:54:43.698481   31069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:54:43.698752   31069 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:54:43.698910   31069 main.go:141] libmachine: (ha-908878-m04) Calling .DriverName
	I0528 20:54:43.699065   31069 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 20:54:43.699082   31069 main.go:141] libmachine: (ha-908878-m04) Calling .GetSSHHostname
	I0528 20:54:43.701647   31069 main.go:141] libmachine: (ha-908878-m04) DBG | domain ha-908878-m04 has defined MAC address 52:54:00:dd:1f:24 in network mk-ha-908878
	I0528 20:54:43.702078   31069 main.go:141] libmachine: (ha-908878-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:1f:24", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:52:08 +0000 UTC Type:0 Mac:52:54:00:dd:1f:24 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:ha-908878-m04 Clientid:01:52:54:00:dd:1f:24}
	I0528 20:54:43.702120   31069 main.go:141] libmachine: (ha-908878-m04) DBG | domain ha-908878-m04 has defined IP address 192.168.39.38 and MAC address 52:54:00:dd:1f:24 in network mk-ha-908878
	I0528 20:54:43.702206   31069 main.go:141] libmachine: (ha-908878-m04) Calling .GetSSHPort
	I0528 20:54:43.702476   31069 main.go:141] libmachine: (ha-908878-m04) Calling .GetSSHKeyPath
	I0528 20:54:43.702607   31069 main.go:141] libmachine: (ha-908878-m04) Calling .GetSSHUsername
	I0528 20:54:43.702739   31069 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878-m04/id_rsa Username:docker}
	W0528 20:55:02.137984   31069 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.38:22: connect: no route to host
	W0528 20:55:02.138057   31069 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.38:22: connect: no route to host
	E0528 20:55:02.138070   31069 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.38:22: connect: no route to host
	I0528 20:55:02.138078   31069 status.go:257] ha-908878-m04 status: &{Name:ha-908878-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0528 20:55:02.138112   31069 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.38:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-908878 status -v=7 --alsologtostderr" : exit status 3
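[Editor's note] The status failure above reduces to the worker node's SSH endpoint being unreachable ("dial tcp 192.168.39.38:22: connect: no route to host"), which is why ha-908878-m04 is reported as Host:Error / kubelet:Nonexistent. As a hedged illustration only (not minikube's status code), the Go sketch below shows the kind of TCP reachability probe involved; the address is copied from the log and the helper name is hypothetical.

package main

import (
	"fmt"
	"net"
	"time"
)

// sshReachable reports whether a TCP connection to the node's SSH port
// can be established within the given timeout. This mirrors the check
// that failed above with "connect: no route to host"; it is an
// illustrative sketch, not the actual status implementation.
func sshReachable(addr string, timeout time.Duration) bool {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	if !sshReachable("192.168.39.38:22", 5*time.Second) {
		fmt.Println("host: Error, kubelet: Nonexistent (SSH unreachable)")
	}
}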
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-908878 -n ha-908878
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-908878 logs -n 25: (1.634277454s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-908878 ssh -n ha-908878-m02 sudo cat                                          | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | /home/docker/cp-test_ha-908878-m03_ha-908878-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-908878 cp ha-908878-m03:/home/docker/cp-test.txt                              | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878-m04:/home/docker/cp-test_ha-908878-m03_ha-908878-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-908878 ssh -n                                                                 | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-908878 ssh -n ha-908878-m04 sudo cat                                          | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | /home/docker/cp-test_ha-908878-m03_ha-908878-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-908878 cp testdata/cp-test.txt                                                | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-908878 ssh -n                                                                 | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-908878 cp ha-908878-m04:/home/docker/cp-test.txt                              | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3657915045/001/cp-test_ha-908878-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-908878 ssh -n                                                                 | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-908878 cp ha-908878-m04:/home/docker/cp-test.txt                              | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878:/home/docker/cp-test_ha-908878-m04_ha-908878.txt                       |           |         |         |                     |                     |
	| ssh     | ha-908878 ssh -n                                                                 | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-908878 ssh -n ha-908878 sudo cat                                              | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | /home/docker/cp-test_ha-908878-m04_ha-908878.txt                                 |           |         |         |                     |                     |
	| cp      | ha-908878 cp ha-908878-m04:/home/docker/cp-test.txt                              | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878-m02:/home/docker/cp-test_ha-908878-m04_ha-908878-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-908878 ssh -n                                                                 | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-908878 ssh -n ha-908878-m02 sudo cat                                          | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | /home/docker/cp-test_ha-908878-m04_ha-908878-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-908878 cp ha-908878-m04:/home/docker/cp-test.txt                              | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878-m03:/home/docker/cp-test_ha-908878-m04_ha-908878-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-908878 ssh -n                                                                 | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | ha-908878-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-908878 ssh -n ha-908878-m03 sudo cat                                          | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC | 28 May 24 20:42 UTC |
	|         | /home/docker/cp-test_ha-908878-m04_ha-908878-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-908878 node stop m02 -v=7                                                     | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:42 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-908878 node start m02 -v=7                                                    | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:45 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-908878 -v=7                                                           | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:46 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-908878 -v=7                                                                | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:46 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-908878 --wait=true -v=7                                                    | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:48 UTC | 28 May 24 20:52 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-908878                                                                | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:52 UTC |                     |
	| node    | ha-908878 node delete m03 -v=7                                                   | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:52 UTC | 28 May 24 20:52 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-908878 stop -v=7                                                              | ha-908878 | jenkins | v1.33.1 | 28 May 24 20:52 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/28 20:48:24
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0528 20:48:24.204068   28866 out.go:291] Setting OutFile to fd 1 ...
	I0528 20:48:24.204182   28866 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:48:24.204192   28866 out.go:304] Setting ErrFile to fd 2...
	I0528 20:48:24.204197   28866 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:48:24.204371   28866 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
	I0528 20:48:24.204894   28866 out.go:298] Setting JSON to false
	I0528 20:48:24.205878   28866 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1847,"bootTime":1716927457,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0528 20:48:24.205930   28866 start.go:139] virtualization: kvm guest
	I0528 20:48:24.208220   28866 out.go:177] * [ha-908878] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0528 20:48:24.209581   28866 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 20:48:24.210757   28866 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 20:48:24.209558   28866 notify.go:220] Checking for updates...
	I0528 20:48:24.213081   28866 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 20:48:24.214389   28866 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 20:48:24.215613   28866 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0528 20:48:24.216762   28866 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 20:48:24.218304   28866 config.go:182] Loaded profile config "ha-908878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 20:48:24.218390   28866 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 20:48:24.218728   28866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:48:24.218768   28866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:48:24.235219   28866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37229
	I0528 20:48:24.235557   28866 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:48:24.236084   28866 main.go:141] libmachine: Using API Version  1
	I0528 20:48:24.236107   28866 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:48:24.236505   28866 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:48:24.236707   28866 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:48:24.270430   28866 out.go:177] * Using the kvm2 driver based on existing profile
	I0528 20:48:24.271599   28866 start.go:297] selected driver: kvm2
	I0528 20:48:24.271612   28866 start.go:901] validating driver "kvm2" against &{Name:ha-908878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.1 ClusterName:ha-908878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.239 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.73 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.38 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 20:48:24.271860   28866 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0528 20:48:24.272196   28866 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 20:48:24.272280   28866 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18966-3963/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0528 20:48:24.286440   28866 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0528 20:48:24.287048   28866 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 20:48:24.287101   28866 cni.go:84] Creating CNI manager for ""
	I0528 20:48:24.287112   28866 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0528 20:48:24.287165   28866 start.go:340] cluster config:
	{Name:ha-908878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-908878 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.239 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.73 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.38 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-till
er:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 20:48:24.287281   28866 iso.go:125] acquiring lock: {Name:mk0ef48bd19f4019c3ee87391d15b9afc1d5e8d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 20:48:24.288882   28866 out.go:177] * Starting "ha-908878" primary control-plane node in "ha-908878" cluster
	I0528 20:48:24.289973   28866 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 20:48:24.289997   28866 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0528 20:48:24.290018   28866 cache.go:56] Caching tarball of preloaded images
	I0528 20:48:24.290075   28866 preload.go:173] Found /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0528 20:48:24.290085   28866 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0528 20:48:24.290184   28866 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/config.json ...
	I0528 20:48:24.290372   28866 start.go:360] acquireMachinesLock for ha-908878: {Name:mk6fb3103b370c7f3d9923fd9de7afe2c77239d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0528 20:48:24.290409   28866 start.go:364] duration metric: took 19.609µs to acquireMachinesLock for "ha-908878"
	I0528 20:48:24.290422   28866 start.go:96] Skipping create...Using existing machine configuration
	I0528 20:48:24.290433   28866 fix.go:54] fixHost starting: 
	I0528 20:48:24.290683   28866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:48:24.290713   28866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:48:24.303677   28866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43033
	I0528 20:48:24.304146   28866 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:48:24.304580   28866 main.go:141] libmachine: Using API Version  1
	I0528 20:48:24.304601   28866 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:48:24.304847   28866 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:48:24.305032   28866 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:48:24.305185   28866 main.go:141] libmachine: (ha-908878) Calling .GetState
	I0528 20:48:24.306689   28866 fix.go:112] recreateIfNeeded on ha-908878: state=Running err=<nil>
	W0528 20:48:24.306720   28866 fix.go:138] unexpected machine state, will restart: <nil>
	I0528 20:48:24.308351   28866 out.go:177] * Updating the running kvm2 "ha-908878" VM ...
	I0528 20:48:24.309711   28866 machine.go:94] provisionDockerMachine start ...
	I0528 20:48:24.309726   28866 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:48:24.309912   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:48:24.312242   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:48:24.312751   28866 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:48:24.312796   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:48:24.312942   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:48:24.313118   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:48:24.313242   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:48:24.313410   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:48:24.313596   28866 main.go:141] libmachine: Using SSH client type: native
	I0528 20:48:24.313832   28866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0528 20:48:24.313847   28866 main.go:141] libmachine: About to run SSH command:
	hostname
	I0528 20:48:24.431595   28866 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-908878
	
	I0528 20:48:24.431622   28866 main.go:141] libmachine: (ha-908878) Calling .GetMachineName
	I0528 20:48:24.431843   28866 buildroot.go:166] provisioning hostname "ha-908878"
	I0528 20:48:24.431866   28866 main.go:141] libmachine: (ha-908878) Calling .GetMachineName
	I0528 20:48:24.432014   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:48:24.434759   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:48:24.435153   28866 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:48:24.435173   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:48:24.435360   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:48:24.435525   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:48:24.435671   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:48:24.435821   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:48:24.435981   28866 main.go:141] libmachine: Using SSH client type: native
	I0528 20:48:24.436195   28866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0528 20:48:24.436213   28866 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-908878 && echo "ha-908878" | sudo tee /etc/hostname
	I0528 20:48:24.570044   28866 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-908878
	
	I0528 20:48:24.570074   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:48:24.572713   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:48:24.573136   28866 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:48:24.573167   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:48:24.573302   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:48:24.573494   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:48:24.573632   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:48:24.573753   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:48:24.573935   28866 main.go:141] libmachine: Using SSH client type: native
	I0528 20:48:24.574096   28866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0528 20:48:24.574112   28866 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-908878' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-908878/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-908878' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 20:48:24.690302   28866 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 20:48:24.690357   28866 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18966-3963/.minikube CaCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18966-3963/.minikube}
	I0528 20:48:24.690406   28866 buildroot.go:174] setting up certificates
	I0528 20:48:24.690420   28866 provision.go:84] configureAuth start
	I0528 20:48:24.690437   28866 main.go:141] libmachine: (ha-908878) Calling .GetMachineName
	I0528 20:48:24.690679   28866 main.go:141] libmachine: (ha-908878) Calling .GetIP
	I0528 20:48:24.693174   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:48:24.693527   28866 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:48:24.693575   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:48:24.693628   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:48:24.695683   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:48:24.696050   28866 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:48:24.696075   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:48:24.696190   28866 provision.go:143] copyHostCerts
	I0528 20:48:24.696220   28866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem
	I0528 20:48:24.696258   28866 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem, removing ...
	I0528 20:48:24.696272   28866 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem
	I0528 20:48:24.696332   28866 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem (1123 bytes)
	I0528 20:48:24.696429   28866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem
	I0528 20:48:24.696457   28866 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem, removing ...
	I0528 20:48:24.696467   28866 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem
	I0528 20:48:24.696495   28866 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem (1675 bytes)
	I0528 20:48:24.696547   28866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem
	I0528 20:48:24.696563   28866 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem, removing ...
	I0528 20:48:24.696569   28866 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem
	I0528 20:48:24.696589   28866 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem (1078 bytes)
	I0528 20:48:24.696647   28866 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem org=jenkins.ha-908878 san=[127.0.0.1 192.168.39.100 ha-908878 localhost minikube]
	I0528 20:48:25.053830   28866 provision.go:177] copyRemoteCerts
	I0528 20:48:25.053893   28866 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 20:48:25.053914   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:48:25.056270   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:48:25.056647   28866 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:48:25.056676   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:48:25.056851   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:48:25.057052   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:48:25.057219   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:48:25.057370   28866 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/id_rsa Username:docker}
	I0528 20:48:25.140614   28866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0528 20:48:25.140675   28866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0528 20:48:25.168749   28866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0528 20:48:25.168820   28866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0528 20:48:25.198869   28866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0528 20:48:25.198914   28866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0528 20:48:25.222989   28866 provision.go:87] duration metric: took 532.546897ms to configureAuth
	I0528 20:48:25.223010   28866 buildroot.go:189] setting minikube options for container-runtime
	I0528 20:48:25.223203   28866 config.go:182] Loaded profile config "ha-908878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 20:48:25.223281   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:48:25.225960   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:48:25.226327   28866 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:48:25.226356   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:48:25.226494   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:48:25.226678   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:48:25.226802   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:48:25.226904   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:48:25.227090   28866 main.go:141] libmachine: Using SSH client type: native
	I0528 20:48:25.227237   28866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0528 20:48:25.227252   28866 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0528 20:49:56.091138   28866 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0528 20:49:56.091170   28866 machine.go:97] duration metric: took 1m31.781448103s to provisionDockerMachine
	I0528 20:49:56.091182   28866 start.go:293] postStartSetup for "ha-908878" (driver="kvm2")
	I0528 20:49:56.091191   28866 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0528 20:49:56.091204   28866 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:49:56.091547   28866 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0528 20:49:56.091572   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:49:56.094605   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:49:56.095049   28866 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:49:56.095071   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:49:56.095230   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:49:56.095444   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:49:56.095608   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:49:56.095707   28866 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/id_rsa Username:docker}
	I0528 20:49:56.181650   28866 ssh_runner.go:195] Run: cat /etc/os-release
	I0528 20:49:56.186256   28866 info.go:137] Remote host: Buildroot 2023.02.9
	I0528 20:49:56.186286   28866 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/addons for local assets ...
	I0528 20:49:56.186355   28866 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/files for local assets ...
	I0528 20:49:56.186465   28866 filesync.go:149] local asset: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem -> 117602.pem in /etc/ssl/certs
	I0528 20:49:56.186479   28866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem -> /etc/ssl/certs/117602.pem
	I0528 20:49:56.186665   28866 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0528 20:49:56.195827   28866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem --> /etc/ssl/certs/117602.pem (1708 bytes)
	I0528 20:49:56.220860   28866 start.go:296] duration metric: took 129.669198ms for postStartSetup
	I0528 20:49:56.220890   28866 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:49:56.221158   28866 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0528 20:49:56.221181   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:49:56.224021   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:49:56.224400   28866 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:49:56.224444   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:49:56.224596   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:49:56.224785   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:49:56.224949   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:49:56.225145   28866 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/id_rsa Username:docker}
	W0528 20:49:56.307945   28866 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0528 20:49:56.307970   28866 fix.go:56] duration metric: took 1m32.017539741s for fixHost
	I0528 20:49:56.307994   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:49:56.310432   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:49:56.310825   28866 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:49:56.310852   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:49:56.310976   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:49:56.311163   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:49:56.311349   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:49:56.311508   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:49:56.311670   28866 main.go:141] libmachine: Using SSH client type: native
	I0528 20:49:56.311831   28866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0528 20:49:56.311842   28866 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0528 20:49:56.422632   28866 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716929396.383641572
	
	I0528 20:49:56.422656   28866 fix.go:216] guest clock: 1716929396.383641572
	I0528 20:49:56.422666   28866 fix.go:229] Guest: 2024-05-28 20:49:56.383641572 +0000 UTC Remote: 2024-05-28 20:49:56.30797848 +0000 UTC m=+92.137253979 (delta=75.663092ms)
	I0528 20:49:56.422699   28866 fix.go:200] guest clock delta is within tolerance: 75.663092ms
	I0528 20:49:56.422706   28866 start.go:83] releasing machines lock for "ha-908878", held for 1m32.132288075s
	I0528 20:49:56.422726   28866 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:49:56.422958   28866 main.go:141] libmachine: (ha-908878) Calling .GetIP
	I0528 20:49:56.425582   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:49:56.425998   28866 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:49:56.426023   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:49:56.426192   28866 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:49:56.426646   28866 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:49:56.426809   28866 main.go:141] libmachine: (ha-908878) Calling .DriverName
	I0528 20:49:56.426880   28866 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0528 20:49:56.426919   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:49:56.426985   28866 ssh_runner.go:195] Run: cat /version.json
	I0528 20:49:56.427002   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHHostname
	I0528 20:49:56.429527   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:49:56.429864   28866 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:49:56.429891   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:49:56.429910   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:49:56.430039   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:49:56.430212   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:49:56.430336   28866 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:49:56.430343   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:49:56.430373   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:49:56.430548   28866 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/id_rsa Username:docker}
	I0528 20:49:56.430612   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHPort
	I0528 20:49:56.430723   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHKeyPath
	I0528 20:49:56.430860   28866 main.go:141] libmachine: (ha-908878) Calling .GetSSHUsername
	I0528 20:49:56.430996   28866 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/ha-908878/id_rsa Username:docker}
	I0528 20:49:56.538071   28866 ssh_runner.go:195] Run: systemctl --version
	I0528 20:49:56.559683   28866 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0528 20:49:56.764634   28866 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0528 20:49:56.771238   28866 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0528 20:49:56.771287   28866 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 20:49:56.780671   28866 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0528 20:49:56.780688   28866 start.go:494] detecting cgroup driver to use...
	I0528 20:49:56.780750   28866 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0528 20:49:56.797765   28866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 20:49:56.810485   28866 docker.go:217] disabling cri-docker service (if available) ...
	I0528 20:49:56.810535   28866 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0528 20:49:56.823744   28866 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0528 20:49:56.836390   28866 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0528 20:49:56.993900   28866 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0528 20:49:57.146929   28866 docker.go:233] disabling docker service ...
	I0528 20:49:57.146999   28866 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0528 20:49:57.164741   28866 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0528 20:49:57.178852   28866 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0528 20:49:57.333890   28866 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0528 20:49:57.478649   28866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0528 20:49:57.492514   28866 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 20:49:57.511447   28866 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0528 20:49:57.511516   28866 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:49:57.521824   28866 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0528 20:49:57.521888   28866 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:49:57.532010   28866 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:49:57.542076   28866 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:49:57.552040   28866 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0528 20:49:57.565791   28866 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:49:57.575541   28866 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:49:57.588966   28866 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 20:49:57.598707   28866 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0528 20:49:57.607571   28866 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0528 20:49:57.616222   28866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 20:49:57.758765   28866 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0528 20:49:58.061074   28866 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0528 20:49:58.061145   28866 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0528 20:49:58.066417   28866 start.go:562] Will wait 60s for crictl version
	I0528 20:49:58.066464   28866 ssh_runner.go:195] Run: which crictl
	I0528 20:49:58.070375   28866 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0528 20:49:58.111248   28866 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0528 20:49:58.111323   28866 ssh_runner.go:195] Run: crio --version
	I0528 20:49:58.141803   28866 ssh_runner.go:195] Run: crio --version
	I0528 20:49:58.178059   28866 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0528 20:49:58.179295   28866 main.go:141] libmachine: (ha-908878) Calling .GetIP
	I0528 20:49:58.181831   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:49:58.182159   28866 main.go:141] libmachine: (ha-908878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:73:cb", ip: ""} in network mk-ha-908878: {Iface:virbr1 ExpiryTime:2024-05-28 21:38:42 +0000 UTC Type:0 Mac:52:54:00:bc:73:cb Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-908878 Clientid:01:52:54:00:bc:73:cb}
	I0528 20:49:58.182205   28866 main.go:141] libmachine: (ha-908878) DBG | domain ha-908878 has defined IP address 192.168.39.100 and MAC address 52:54:00:bc:73:cb in network mk-ha-908878
	I0528 20:49:58.182381   28866 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0528 20:49:58.187120   28866 kubeadm.go:877] updating cluster {Name:ha-908878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Cl
usterName:ha-908878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.239 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.73 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.38 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0528 20:49:58.187247   28866 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 20:49:58.187283   28866 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 20:49:58.232156   28866 crio.go:514] all images are preloaded for cri-o runtime.
	I0528 20:49:58.232175   28866 crio.go:433] Images already preloaded, skipping extraction
	I0528 20:49:58.232230   28866 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 20:49:58.264195   28866 crio.go:514] all images are preloaded for cri-o runtime.
	I0528 20:49:58.264215   28866 cache_images.go:84] Images are preloaded, skipping loading
	I0528 20:49:58.264222   28866 kubeadm.go:928] updating node { 192.168.39.100 8443 v1.30.1 crio true true} ...
	I0528 20:49:58.264333   28866 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-908878 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-908878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0528 20:49:58.264396   28866 ssh_runner.go:195] Run: crio config
	I0528 20:49:58.308515   28866 cni.go:84] Creating CNI manager for ""
	I0528 20:49:58.308537   28866 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0528 20:49:58.308557   28866 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0528 20:49:58.308586   28866 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.100 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-908878 NodeName:ha-908878 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0528 20:49:58.308707   28866 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.100
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-908878"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.100
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.100"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0528 20:49:58.308725   28866 kube-vip.go:115] generating kube-vip config ...
	I0528 20:49:58.308760   28866 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0528 20:49:58.320326   28866 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0528 20:49:58.320437   28866 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0528 20:49:58.320498   28866 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0528 20:49:58.329728   28866 binaries.go:44] Found k8s binaries, skipping transfer
	I0528 20:49:58.329798   28866 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0528 20:49:58.338940   28866 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0528 20:49:58.355343   28866 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0528 20:49:58.370740   28866 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0528 20:49:58.386731   28866 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0528 20:49:58.404742   28866 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0528 20:49:58.409148   28866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 20:49:58.554304   28866 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 20:49:58.575430   28866 certs.go:68] Setting up /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878 for IP: 192.168.39.100
	I0528 20:49:58.575448   28866 certs.go:194] generating shared ca certs ...
	I0528 20:49:58.575469   28866 certs.go:226] acquiring lock for ca certs: {Name:mkf081a2e878fd451cfbdf01dc28a5f7a6a3abc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:49:58.575612   28866 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key
	I0528 20:49:58.575651   28866 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key
	I0528 20:49:58.575660   28866 certs.go:256] generating profile certs ...
	I0528 20:49:58.575727   28866 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/client.key
	I0528 20:49:58.575755   28866 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key.f57a1a49
	I0528 20:49:58.575767   28866 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt.f57a1a49 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.100 192.168.39.239 192.168.39.73 192.168.39.254]
	I0528 20:49:58.804038   28866 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt.f57a1a49 ...
	I0528 20:49:58.804073   28866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt.f57a1a49: {Name:mk40040315213a61d76b8a4de8750cbacbede3cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:49:58.804238   28866 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key.f57a1a49 ...
	I0528 20:49:58.804253   28866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key.f57a1a49: {Name:mkb53dfe536b42922018e47461d6b9031ae3259c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:49:58.804314   28866 certs.go:381] copying /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt.f57a1a49 -> /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt
	I0528 20:49:58.804494   28866 certs.go:385] copying /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key.f57a1a49 -> /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key
	I0528 20:49:58.804621   28866 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/proxy-client.key
	I0528 20:49:58.804637   28866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0528 20:49:58.804649   28866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0528 20:49:58.804662   28866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0528 20:49:58.804677   28866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0528 20:49:58.804691   28866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0528 20:49:58.804704   28866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0528 20:49:58.804720   28866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0528 20:49:58.804732   28866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0528 20:49:58.804779   28866 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem (1338 bytes)
	W0528 20:49:58.804805   28866 certs.go:480] ignoring /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760_empty.pem, impossibly tiny 0 bytes
	I0528 20:49:58.804814   28866 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem (1675 bytes)
	I0528 20:49:58.804833   28866 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem (1078 bytes)
	I0528 20:49:58.804865   28866 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem (1123 bytes)
	I0528 20:49:58.804889   28866 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem (1675 bytes)
	I0528 20:49:58.804925   28866 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem (1708 bytes)
	I0528 20:49:58.804962   28866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0528 20:49:58.804976   28866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem -> /usr/share/ca-certificates/11760.pem
	I0528 20:49:58.804989   28866 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem -> /usr/share/ca-certificates/117602.pem
	I0528 20:49:58.805524   28866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0528 20:49:58.830404   28866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0528 20:49:58.853329   28866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0528 20:49:58.876952   28866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0528 20:49:58.899682   28866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0528 20:49:58.921851   28866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0528 20:49:58.944472   28866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0528 20:49:58.967558   28866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/ha-908878/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0528 20:49:58.989598   28866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0528 20:49:59.012181   28866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem --> /usr/share/ca-certificates/11760.pem (1338 bytes)
	I0528 20:49:59.034280   28866 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem --> /usr/share/ca-certificates/117602.pem (1708 bytes)
	I0528 20:49:59.056612   28866 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0528 20:49:59.072298   28866 ssh_runner.go:195] Run: openssl version
	I0528 20:49:59.077801   28866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11760.pem && ln -fs /usr/share/ca-certificates/11760.pem /etc/ssl/certs/11760.pem"
	I0528 20:49:59.088173   28866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11760.pem
	I0528 20:49:59.092364   28866 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 28 20:34 /usr/share/ca-certificates/11760.pem
	I0528 20:49:59.092404   28866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11760.pem
	I0528 20:49:59.097732   28866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11760.pem /etc/ssl/certs/51391683.0"
	I0528 20:49:59.106953   28866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/117602.pem && ln -fs /usr/share/ca-certificates/117602.pem /etc/ssl/certs/117602.pem"
	I0528 20:49:59.117339   28866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117602.pem
	I0528 20:49:59.121543   28866 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 28 20:34 /usr/share/ca-certificates/117602.pem
	I0528 20:49:59.121588   28866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117602.pem
	I0528 20:49:59.126958   28866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/117602.pem /etc/ssl/certs/3ec20f2e.0"
	I0528 20:49:59.136435   28866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0528 20:49:59.147918   28866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0528 20:49:59.152210   28866 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 28 20:22 /usr/share/ca-certificates/minikubeCA.pem
	I0528 20:49:59.152245   28866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0528 20:49:59.157937   28866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0528 20:49:59.167758   28866 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0528 20:49:59.172177   28866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0528 20:49:59.177508   28866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0528 20:49:59.182920   28866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0528 20:49:59.188238   28866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0528 20:49:59.193393   28866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0528 20:49:59.198779   28866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0528 20:49:59.204157   28866 kubeadm.go:391] StartCluster: {Name:ha-908878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clust
erName:ha-908878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.239 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.73 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.38 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 20:49:59.206385   28866 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0528 20:49:59.206445   28866 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0528 20:49:59.244213   28866 cri.go:89] found id: "5af8a1a44bf9b9407851d48a083b47557b4c872cdfd4995bcbad87344ac95a9c"
	I0528 20:49:59.244239   28866 cri.go:89] found id: "cb43a6411985dc31db5a9076b261726f846c9e3a2a6b14211128785dfa10a0d0"
	I0528 20:49:59.244243   28866 cri.go:89] found id: "f949602c90086db46304946ba677992a2ad4ee9ff44cc88b1780dd33f3a90fba"
	I0528 20:49:59.244247   28866 cri.go:89] found id: "8e652d16bcddb4efaa826971f662ae9d9b0c10496a7ad32cdc523787a676111c"
	I0528 20:49:59.244250   28866 cri.go:89] found id: "7c38e07fa546e3574439db0d8c463f8598be9b5ee1b022328a10e812abe191a6"
	I0528 20:49:59.244253   28866 cri.go:89] found id: "0b6fe231fc7dbce25d01bf248b4e363bd410d67b785874c03d67b9895b05cd8d"
	I0528 20:49:59.244255   28866 cri.go:89] found id: "2470320e3bec5ccf0d58f255e4bc440917dfb3fae12b2a48365113a51d7dd6e9"
	I0528 20:49:59.244258   28866 cri.go:89] found id: "a7ea51bf984916bead29c664359f65ea790360f73bf15486bdf96c758189bd69"
	I0528 20:49:59.244260   28866 cri.go:89] found id: "97ba5f2725852aa19d200b1005c8bc9678a21a2b0a0b768cbf10f1dde692ecbe"
	I0528 20:49:59.244265   28866 cri.go:89] found id: "20cf414ed60510910dc01761f0afc869180c224d4e26872a468b10e603e8e786"
	I0528 20:49:59.244268   28866 cri.go:89] found id: "05d5882852e6eb482b9e3ba850d60958305fa84234fd741f3345967a57b1c1f9"
	I0528 20:49:59.244271   28866 cri.go:89] found id: "aece72d9b21aa208ac4ce8512a57af63132d5882a8675650a980fa6b531af247"
	I0528 20:49:59.244273   28866 cri.go:89] found id: "650c6f374c3b3e6697c77d68d23551a1335fae362d137a464df72e2bc23e4e14"
	I0528 20:49:59.244275   28866 cri.go:89] found id: "f926e075722f1f9cbd1b9f6559baa2071c1f539f729ddcb3572a87f64a8934e9"
	I0528 20:49:59.244281   28866 cri.go:89] found id: ""
	I0528 20:49:59.244319   28866 ssh_runner.go:195] Run: sudo runc list -f json
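
The listing above comes from cri.go shelling out to crictl with a namespace label filter; each `found id:` line is one kube-system container ID returned by that command. A rough stdlib-only sketch of the same invocation (assuming crictl is on PATH and the caller has sufficient privileges; error handling trimmed):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Mirrors the logged command:
    	//   crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    	out, err := exec.Command("crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		panic(err)
    	}
    	for _, id := range strings.Fields(string(out)) {
    		fmt.Println("found id:", id)
    	}
    }
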
	
	
	==> CRI-O <==
	May 28 20:55:02 ha-908878 crio[3860]: time="2024-05-28 20:55:02.742176082Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716929702742151128,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c8a8d7f8-5d60-495e-8de8-2c5aea498a4b name=/runtime.v1.ImageService/ImageFsInfo
	May 28 20:55:02 ha-908878 crio[3860]: time="2024-05-28 20:55:02.742601186Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=007067e6-2981-4635-99ec-7cb7b99a8a04 name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:55:02 ha-908878 crio[3860]: time="2024-05-28 20:55:02.742676546Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=007067e6-2981-4635-99ec-7cb7b99a8a04 name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:55:02 ha-908878 crio[3860]: time="2024-05-28 20:55:02.743164450Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:23d89b9262db69731a4648e821b6ee02ddbcd64e953f442b0b1c790ad99e06bb,PodSandboxId:a10360900a6684a37e5da1e4b53126389f65ecdb98ad83f88412addfdf85e402,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716929492179728308,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79872e2-b267-446a-99dc-5bf9f398d31c,},Annotations:map[string]string{io.kubernetes.container.hash: 1d6aac92,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba9034a620e1e980ba83271c2686d0e2cc1672f83aa4b19c3789a2bcda09a040,PodSandboxId:cf4ac2dd6e8f4c464322a6fed240ad979ba60c14ae71f12bcd94430d7a8d2028,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1716929468185478357,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4mzh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8069a7ea-0ab1-4064-b982-867dbdfd97aa,},Annotations:map[string]string{io.kubernetes.container.hash: 730aff3c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4de963c4394e867a25c17c67c272c43fcf46165892ff733312639139f72f5cc8,PodSandboxId:a10360900a6684a37e5da1e4b53126389f65ecdb98ad83f88412addfdf85e402,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716929446183340156,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79872e2-b267-446a-99dc-5bf9f398d31c,},Annotations:map[string]string{io.kubernetes.container.hash: 1d6aac92,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5457554337f0db189ea45788d726f4b927d69bfc466967041382543f02be8b80,PodSandboxId:a140a8d8883554f2f866690c6b175764121c70dbd919a9954e74a7003deaccca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716929446177076854,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 386e57d3e5870130913571793cc3ee94,},Annotations:map[string]string{io.kubernetes.container.hash: f40b1b90,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f796b4c1fcb382b678803794da8228990266f95adb1e99f62cb07eb6a2dc2b0e,PodSandboxId:fc72b589827f1a220bf5901fb646f2f7056f1d2ebd15499fbaf01fc162c34ed1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716929443175818326,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c574865fe7260be39c1b7f67609414,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:805069c4ea3f52b056a6db0b2cbc48c32b9aec82e2eced5975d131a3d6813894,PodSandboxId:4cffb5b7c6c9c681ffed44e3b79a1b4db97beb4e3bc56f7c0bebbf9be6e48c4d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716929433445060154,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ljbzs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a49d7b7-d8ae-44a8-8393-51781cf73591,},Annotations:map[string]string{io.kubernetes.container.hash: 1fd948c0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41c6e92ce9a6765f9775e85692043edbc3cacb6d1eb3f9c07f81dc5fc71305a5,PodSandboxId:144ffb432d19700f4db1dbee070861a5bafb881fb8aaf7bd4c4b4a06bebe57fb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716929411246533502,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e83b4276fb38a7bed5e82c53c2dba82,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:278ab03af8f2313b514521259d04b44733ed754d0d537ac169ab50c94bc2944c,PodSandboxId:ea611d2d609918a13a673876b8f432aa70108f7177ebae9950b9f6eccbdc2ab9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716929400629960416,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ng8mq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0b1264-09c7-44b2-ba8c-e145e825fdbe,},Annotations:map[string]string{io.kubernetes.container.hash: 3bac967e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbe5284b
67f85e03d1cd00b308ec28baa2dd3f031d560ef8bcee3e352afbd4f0,PodSandboxId:cf4ac2dd6e8f4c464322a6fed240ad979ba60c14ae71f12bcd94430d7a8d2028,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1716929400347691445,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4mzh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8069a7ea-0ab1-4064-b982-867dbdfd97aa,},Annotations:map[string]string{io.kubernetes.container.hash: 730aff3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4514d4354f5473ce91d574749050b5534564282802b
dbac026aa2ea297033f90,PodSandboxId:3a8f3b7df90322d86bb148e5f38eae2fe33ca7873be9f745c0c6db25143dc42a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716929400480532978,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mvx67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b51beb7-0397-4008-b878-97edd41c6b94,},Annotations:map[string]string{io.kubernetes.container.hash: 10935396,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d3a1aa224cb0622f2a2e529a50b304df8ff9e2738e1dc6698099e3a06dba136,PodSandboxId:c7cb03481617d80cb9f9dcef56558b44a28163ee1857e6c9900a3ff7ef9db308,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716929400208165709,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35af7b91312437941e451d7e638db460,},Annotations:map[string]string{io.kubernetes.container.hash: cd1ed920,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c540689ad07f7ee30a68b6951597a2a7519d5077f0d3603cb0a035ebaab6dc15,PodSandboxId:d59153638107de764c3747df65809d3cdc474479ba91b817e5d7b3c598f84cb1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716929400194256948,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5fmns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a3bda1-29ba-4982-baf5-0adc97b4eb45,},Annotations:map[string]string{io.kubernetes.container.hash: dc173f12,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"}
,{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:512ee36cfc30be3bbea5a2852da22e2052a8bc2f3c461e7e0f1c5bdab2356ceb,PodSandboxId:fc72b589827f1a220bf5901fb646f2f7056f1d2ebd15499fbaf01fc162c34ed1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716929400105679968,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c574865fe7260be39c1b7f676094
14,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7611fb5205e4379847769c096d7bee8dd1dcbf6921b3cd3e9e213bd651f0650a,PodSandboxId:bfa5f863df0d89a8dc8be0920e4334f7380837022415720c4d0b630df3fc2adf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716929400018305553,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f1c036da804e98d08f1608248bc0a8f,},Annotations:map[
string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eea72764c6cef58a30efce77aa95fcea5c0414983c225f7e96112a5a8c65c5a,PodSandboxId:a140a8d8883554f2f866690c6b175764121c70dbd919a9954e74a7003deaccca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716929399958628122,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 386e57d3e5870130913571793cc3ee94,},Annotations:map[string]string{io.kuber
netes.container.hash: f40b1b90,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92c83dd481e567399f2a52a7fbc48eccc99c0cb418fa571fd10df44f361a9f39,PodSandboxId:dfbac4c22bc27b76f3af221ef78b5eb13bffba587238a75f6978ec32db0d2669,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716928912917661950,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ljbzs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a49d7b7-d8ae-44a8-8393-51781cf73591,},Annotations:map[string]string{io.kuberne
tes.container.hash: 1fd948c0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c38e07fa546e3574439db0d8c463f8598be9b5ee1b022328a10e812abe191a6,PodSandboxId:fb8a83ba500b4c9c50bcef1be750bfc7e5a7db94160b054581efcafa03519147,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716928766590753062,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mvx67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b51beb7-0397-4008-b878-97edd41c6b94,},Annotations:map[string]string{io.kubernetes.container.hash: 10935396,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2470320e3bec5ccf0d58f255e4bc440917dfb3fae12b2a48365113a51d7dd6e9,PodSandboxId:5333c6894c4460cb033cd36d59ef316b9716e099ae9997c69bc25e869c87b03d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716928766572153338,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-5fmns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a3bda1-29ba-4982-baf5-0adc97b4eb45,},Annotations:map[string]string{io.kubernetes.container.hash: dc173f12,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97ba5f2725852aa19d200b1005c8bc9678a21a2b0a0b768cbf10f1dde692ecbe,PodSandboxId:2a5f076d2569c649f3e0bcb898d11d4c19a26e70dc189cb8501def24b83cdcec,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716928761367501777,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ng8mq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0b1264-09c7-44b2-ba8c-e145e825fdbe,},Annotations:map[string]string{io.kubernetes.container.hash: 3bac967e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05d5882852e6eb482b9e3ba850d60958305fa84234fd741f3345967a57b1c1f9,PodSandboxId:54beb07b658e504e1a48a48b45286f12adc245e7ae631b29d6fdd4debbc5e82a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f
9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716928741088331292,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f1c036da804e98d08f1608248bc0a8f,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:650c6f374c3b3e6697c77d68d23551a1335fae362d137a464df72e2bc23e4e14,PodSandboxId:232d528c76896db38a96f5da2a0ed0761f4f63d26ec15dd06b5b221600ccabff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1716928740991520694,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35af7b91312437941e451d7e638db460,},Annotations:map[string]string{io.kubernetes.container.hash: cd1ed920,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=007067e6-2981-4635-99ec-7cb7b99a8a04 name=/runtime.v1.RuntimeService/ListContainers
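
The CRI-O entries above show a client repeatedly polling the CRI endpoints (Version, ImageService/ImageFsInfo, RuntimeService/ListContainers), with each request/response pair logged by the otel-collector interceptor. A minimal sketch of issuing the same Version call over the CRI gRPC API, assuming the default CRI-O socket path and the k8s.io/cri-api client (not code from this test):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// Default CRI-O socket; adjust if the runtime listens elsewhere.
    	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	client := runtimeapi.NewRuntimeServiceClient(conn)
    	resp, err := client.Version(ctx, &runtimeapi.VersionRequest{})
    	if err != nil {
    		panic(err)
    	}
    	// Fields correspond to the logged VersionResponse, e.g. RuntimeName:cri-o, RuntimeVersion:1.29.1.
    	fmt.Printf("%s %s (CRI %s)\n", resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
    }
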
	May 28 20:55:02 ha-908878 crio[3860]: time="2024-05-28 20:55:02.787712200Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c0971178-7536-435e-85f0-4a89e234ad1a name=/runtime.v1.RuntimeService/Version
	May 28 20:55:02 ha-908878 crio[3860]: time="2024-05-28 20:55:02.787787028Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c0971178-7536-435e-85f0-4a89e234ad1a name=/runtime.v1.RuntimeService/Version
	May 28 20:55:02 ha-908878 crio[3860]: time="2024-05-28 20:55:02.789020965Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8ea19a4c-5569-4f58-898e-636f89764d60 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 20:55:02 ha-908878 crio[3860]: time="2024-05-28 20:55:02.789469411Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716929702789446306,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8ea19a4c-5569-4f58-898e-636f89764d60 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 20:55:02 ha-908878 crio[3860]: time="2024-05-28 20:55:02.790120525Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b63ccae4-c969-4d59-9b0b-a3ed50645396 name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:55:02 ha-908878 crio[3860]: time="2024-05-28 20:55:02.790195529Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b63ccae4-c969-4d59-9b0b-a3ed50645396 name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:55:02 ha-908878 crio[3860]: time="2024-05-28 20:55:02.790598139Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:23d89b9262db69731a4648e821b6ee02ddbcd64e953f442b0b1c790ad99e06bb,PodSandboxId:a10360900a6684a37e5da1e4b53126389f65ecdb98ad83f88412addfdf85e402,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716929492179728308,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79872e2-b267-446a-99dc-5bf9f398d31c,},Annotations:map[string]string{io.kubernetes.container.hash: 1d6aac92,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba9034a620e1e980ba83271c2686d0e2cc1672f83aa4b19c3789a2bcda09a040,PodSandboxId:cf4ac2dd6e8f4c464322a6fed240ad979ba60c14ae71f12bcd94430d7a8d2028,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1716929468185478357,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4mzh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8069a7ea-0ab1-4064-b982-867dbdfd97aa,},Annotations:map[string]string{io.kubernetes.container.hash: 730aff3c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4de963c4394e867a25c17c67c272c43fcf46165892ff733312639139f72f5cc8,PodSandboxId:a10360900a6684a37e5da1e4b53126389f65ecdb98ad83f88412addfdf85e402,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716929446183340156,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79872e2-b267-446a-99dc-5bf9f398d31c,},Annotations:map[string]string{io.kubernetes.container.hash: 1d6aac92,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5457554337f0db189ea45788d726f4b927d69bfc466967041382543f02be8b80,PodSandboxId:a140a8d8883554f2f866690c6b175764121c70dbd919a9954e74a7003deaccca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716929446177076854,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 386e57d3e5870130913571793cc3ee94,},Annotations:map[string]string{io.kubernetes.container.hash: f40b1b90,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f796b4c1fcb382b678803794da8228990266f95adb1e99f62cb07eb6a2dc2b0e,PodSandboxId:fc72b589827f1a220bf5901fb646f2f7056f1d2ebd15499fbaf01fc162c34ed1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716929443175818326,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c574865fe7260be39c1b7f67609414,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:805069c4ea3f52b056a6db0b2cbc48c32b9aec82e2eced5975d131a3d6813894,PodSandboxId:4cffb5b7c6c9c681ffed44e3b79a1b4db97beb4e3bc56f7c0bebbf9be6e48c4d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716929433445060154,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ljbzs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a49d7b7-d8ae-44a8-8393-51781cf73591,},Annotations:map[string]string{io.kubernetes.container.hash: 1fd948c0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41c6e92ce9a6765f9775e85692043edbc3cacb6d1eb3f9c07f81dc5fc71305a5,PodSandboxId:144ffb432d19700f4db1dbee070861a5bafb881fb8aaf7bd4c4b4a06bebe57fb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716929411246533502,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e83b4276fb38a7bed5e82c53c2dba82,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:278ab03af8f2313b514521259d04b44733ed754d0d537ac169ab50c94bc2944c,PodSandboxId:ea611d2d609918a13a673876b8f432aa70108f7177ebae9950b9f6eccbdc2ab9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716929400629960416,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ng8mq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0b1264-09c7-44b2-ba8c-e145e825fdbe,},Annotations:map[string]string{io.kubernetes.container.hash: 3bac967e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbe5284b
67f85e03d1cd00b308ec28baa2dd3f031d560ef8bcee3e352afbd4f0,PodSandboxId:cf4ac2dd6e8f4c464322a6fed240ad979ba60c14ae71f12bcd94430d7a8d2028,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1716929400347691445,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4mzh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8069a7ea-0ab1-4064-b982-867dbdfd97aa,},Annotations:map[string]string{io.kubernetes.container.hash: 730aff3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4514d4354f5473ce91d574749050b5534564282802b
dbac026aa2ea297033f90,PodSandboxId:3a8f3b7df90322d86bb148e5f38eae2fe33ca7873be9f745c0c6db25143dc42a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716929400480532978,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mvx67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b51beb7-0397-4008-b878-97edd41c6b94,},Annotations:map[string]string{io.kubernetes.container.hash: 10935396,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d3a1aa224cb0622f2a2e529a50b304df8ff9e2738e1dc6698099e3a06dba136,PodSandboxId:c7cb03481617d80cb9f9dcef56558b44a28163ee1857e6c9900a3ff7ef9db308,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716929400208165709,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35af7b91312437941e451d7e638db460,},Annotations:map[string]string{io.kubernetes.container.hash: cd1ed920,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c540689ad07f7ee30a68b6951597a2a7519d5077f0d3603cb0a035ebaab6dc15,PodSandboxId:d59153638107de764c3747df65809d3cdc474479ba91b817e5d7b3c598f84cb1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716929400194256948,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5fmns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a3bda1-29ba-4982-baf5-0adc97b4eb45,},Annotations:map[string]string{io.kubernetes.container.hash: dc173f12,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"}
,{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:512ee36cfc30be3bbea5a2852da22e2052a8bc2f3c461e7e0f1c5bdab2356ceb,PodSandboxId:fc72b589827f1a220bf5901fb646f2f7056f1d2ebd15499fbaf01fc162c34ed1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716929400105679968,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c574865fe7260be39c1b7f676094
14,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7611fb5205e4379847769c096d7bee8dd1dcbf6921b3cd3e9e213bd651f0650a,PodSandboxId:bfa5f863df0d89a8dc8be0920e4334f7380837022415720c4d0b630df3fc2adf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716929400018305553,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f1c036da804e98d08f1608248bc0a8f,},Annotations:map[
string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eea72764c6cef58a30efce77aa95fcea5c0414983c225f7e96112a5a8c65c5a,PodSandboxId:a140a8d8883554f2f866690c6b175764121c70dbd919a9954e74a7003deaccca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716929399958628122,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 386e57d3e5870130913571793cc3ee94,},Annotations:map[string]string{io.kuber
netes.container.hash: f40b1b90,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92c83dd481e567399f2a52a7fbc48eccc99c0cb418fa571fd10df44f361a9f39,PodSandboxId:dfbac4c22bc27b76f3af221ef78b5eb13bffba587238a75f6978ec32db0d2669,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716928912917661950,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ljbzs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a49d7b7-d8ae-44a8-8393-51781cf73591,},Annotations:map[string]string{io.kuberne
tes.container.hash: 1fd948c0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c38e07fa546e3574439db0d8c463f8598be9b5ee1b022328a10e812abe191a6,PodSandboxId:fb8a83ba500b4c9c50bcef1be750bfc7e5a7db94160b054581efcafa03519147,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716928766590753062,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mvx67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b51beb7-0397-4008-b878-97edd41c6b94,},Annotations:map[string]string{io.kubernetes.container.hash: 10935396,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2470320e3bec5ccf0d58f255e4bc440917dfb3fae12b2a48365113a51d7dd6e9,PodSandboxId:5333c6894c4460cb033cd36d59ef316b9716e099ae9997c69bc25e869c87b03d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716928766572153338,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-5fmns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a3bda1-29ba-4982-baf5-0adc97b4eb45,},Annotations:map[string]string{io.kubernetes.container.hash: dc173f12,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97ba5f2725852aa19d200b1005c8bc9678a21a2b0a0b768cbf10f1dde692ecbe,PodSandboxId:2a5f076d2569c649f3e0bcb898d11d4c19a26e70dc189cb8501def24b83cdcec,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716928761367501777,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ng8mq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0b1264-09c7-44b2-ba8c-e145e825fdbe,},Annotations:map[string]string{io.kubernetes.container.hash: 3bac967e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05d5882852e6eb482b9e3ba850d60958305fa84234fd741f3345967a57b1c1f9,PodSandboxId:54beb07b658e504e1a48a48b45286f12adc245e7ae631b29d6fdd4debbc5e82a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f
9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716928741088331292,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f1c036da804e98d08f1608248bc0a8f,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:650c6f374c3b3e6697c77d68d23551a1335fae362d137a464df72e2bc23e4e14,PodSandboxId:232d528c76896db38a96f5da2a0ed0761f4f63d26ec15dd06b5b221600ccabff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1716928740991520694,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35af7b91312437941e451d7e638db460,},Annotations:map[string]string{io.kubernetes.container.hash: cd1ed920,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b63ccae4-c969-4d59-9b0b-a3ed50645396 name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:55:02 ha-908878 crio[3860]: time="2024-05-28 20:55:02.835526817Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fd857d6e-1c42-4edf-893c-e90c516d751b name=/runtime.v1.RuntimeService/Version
	May 28 20:55:02 ha-908878 crio[3860]: time="2024-05-28 20:55:02.835600102Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fd857d6e-1c42-4edf-893c-e90c516d751b name=/runtime.v1.RuntimeService/Version
	May 28 20:55:02 ha-908878 crio[3860]: time="2024-05-28 20:55:02.837113043Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=04f5f326-f6e8-4c84-b33e-3550c4908623 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 20:55:02 ha-908878 crio[3860]: time="2024-05-28 20:55:02.837697201Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716929702837668600,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=04f5f326-f6e8-4c84-b33e-3550c4908623 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 20:55:02 ha-908878 crio[3860]: time="2024-05-28 20:55:02.838542696Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=85c3968c-ab0e-40f4-afa3-92babc026749 name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:55:02 ha-908878 crio[3860]: time="2024-05-28 20:55:02.838624171Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=85c3968c-ab0e-40f4-afa3-92babc026749 name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:55:02 ha-908878 crio[3860]: time="2024-05-28 20:55:02.839089748Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:23d89b9262db69731a4648e821b6ee02ddbcd64e953f442b0b1c790ad99e06bb,PodSandboxId:a10360900a6684a37e5da1e4b53126389f65ecdb98ad83f88412addfdf85e402,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716929492179728308,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79872e2-b267-446a-99dc-5bf9f398d31c,},Annotations:map[string]string{io.kubernetes.container.hash: 1d6aac92,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba9034a620e1e980ba83271c2686d0e2cc1672f83aa4b19c3789a2bcda09a040,PodSandboxId:cf4ac2dd6e8f4c464322a6fed240ad979ba60c14ae71f12bcd94430d7a8d2028,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1716929468185478357,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4mzh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8069a7ea-0ab1-4064-b982-867dbdfd97aa,},Annotations:map[string]string{io.kubernetes.container.hash: 730aff3c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4de963c4394e867a25c17c67c272c43fcf46165892ff733312639139f72f5cc8,PodSandboxId:a10360900a6684a37e5da1e4b53126389f65ecdb98ad83f88412addfdf85e402,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716929446183340156,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79872e2-b267-446a-99dc-5bf9f398d31c,},Annotations:map[string]string{io.kubernetes.container.hash: 1d6aac92,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5457554337f0db189ea45788d726f4b927d69bfc466967041382543f02be8b80,PodSandboxId:a140a8d8883554f2f866690c6b175764121c70dbd919a9954e74a7003deaccca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716929446177076854,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 386e57d3e5870130913571793cc3ee94,},Annotations:map[string]string{io.kubernetes.container.hash: f40b1b90,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f796b4c1fcb382b678803794da8228990266f95adb1e99f62cb07eb6a2dc2b0e,PodSandboxId:fc72b589827f1a220bf5901fb646f2f7056f1d2ebd15499fbaf01fc162c34ed1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716929443175818326,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c574865fe7260be39c1b7f67609414,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:805069c4ea3f52b056a6db0b2cbc48c32b9aec82e2eced5975d131a3d6813894,PodSandboxId:4cffb5b7c6c9c681ffed44e3b79a1b4db97beb4e3bc56f7c0bebbf9be6e48c4d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716929433445060154,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ljbzs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a49d7b7-d8ae-44a8-8393-51781cf73591,},Annotations:map[string]string{io.kubernetes.container.hash: 1fd948c0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41c6e92ce9a6765f9775e85692043edbc3cacb6d1eb3f9c07f81dc5fc71305a5,PodSandboxId:144ffb432d19700f4db1dbee070861a5bafb881fb8aaf7bd4c4b4a06bebe57fb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716929411246533502,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e83b4276fb38a7bed5e82c53c2dba82,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:278ab03af8f2313b514521259d04b44733ed754d0d537ac169ab50c94bc2944c,PodSandboxId:ea611d2d609918a13a673876b8f432aa70108f7177ebae9950b9f6eccbdc2ab9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716929400629960416,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ng8mq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0b1264-09c7-44b2-ba8c-e145e825fdbe,},Annotations:map[string]string{io.kubernetes.container.hash: 3bac967e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbe5284b
67f85e03d1cd00b308ec28baa2dd3f031d560ef8bcee3e352afbd4f0,PodSandboxId:cf4ac2dd6e8f4c464322a6fed240ad979ba60c14ae71f12bcd94430d7a8d2028,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1716929400347691445,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4mzh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8069a7ea-0ab1-4064-b982-867dbdfd97aa,},Annotations:map[string]string{io.kubernetes.container.hash: 730aff3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4514d4354f5473ce91d574749050b5534564282802b
dbac026aa2ea297033f90,PodSandboxId:3a8f3b7df90322d86bb148e5f38eae2fe33ca7873be9f745c0c6db25143dc42a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716929400480532978,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mvx67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b51beb7-0397-4008-b878-97edd41c6b94,},Annotations:map[string]string{io.kubernetes.container.hash: 10935396,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d3a1aa224cb0622f2a2e529a50b304df8ff9e2738e1dc6698099e3a06dba136,PodSandboxId:c7cb03481617d80cb9f9dcef56558b44a28163ee1857e6c9900a3ff7ef9db308,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716929400208165709,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35af7b91312437941e451d7e638db460,},Annotations:map[string]string{io.kubernetes.container.hash: cd1ed920,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c540689ad07f7ee30a68b6951597a2a7519d5077f0d3603cb0a035ebaab6dc15,PodSandboxId:d59153638107de764c3747df65809d3cdc474479ba91b817e5d7b3c598f84cb1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716929400194256948,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5fmns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a3bda1-29ba-4982-baf5-0adc97b4eb45,},Annotations:map[string]string{io.kubernetes.container.hash: dc173f12,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"}
,{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:512ee36cfc30be3bbea5a2852da22e2052a8bc2f3c461e7e0f1c5bdab2356ceb,PodSandboxId:fc72b589827f1a220bf5901fb646f2f7056f1d2ebd15499fbaf01fc162c34ed1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716929400105679968,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c574865fe7260be39c1b7f676094
14,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7611fb5205e4379847769c096d7bee8dd1dcbf6921b3cd3e9e213bd651f0650a,PodSandboxId:bfa5f863df0d89a8dc8be0920e4334f7380837022415720c4d0b630df3fc2adf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716929400018305553,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f1c036da804e98d08f1608248bc0a8f,},Annotations:map[
string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eea72764c6cef58a30efce77aa95fcea5c0414983c225f7e96112a5a8c65c5a,PodSandboxId:a140a8d8883554f2f866690c6b175764121c70dbd919a9954e74a7003deaccca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716929399958628122,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 386e57d3e5870130913571793cc3ee94,},Annotations:map[string]string{io.kuber
netes.container.hash: f40b1b90,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92c83dd481e567399f2a52a7fbc48eccc99c0cb418fa571fd10df44f361a9f39,PodSandboxId:dfbac4c22bc27b76f3af221ef78b5eb13bffba587238a75f6978ec32db0d2669,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716928912917661950,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ljbzs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a49d7b7-d8ae-44a8-8393-51781cf73591,},Annotations:map[string]string{io.kuberne
tes.container.hash: 1fd948c0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c38e07fa546e3574439db0d8c463f8598be9b5ee1b022328a10e812abe191a6,PodSandboxId:fb8a83ba500b4c9c50bcef1be750bfc7e5a7db94160b054581efcafa03519147,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716928766590753062,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mvx67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b51beb7-0397-4008-b878-97edd41c6b94,},Annotations:map[string]string{io.kubernetes.container.hash: 10935396,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2470320e3bec5ccf0d58f255e4bc440917dfb3fae12b2a48365113a51d7dd6e9,PodSandboxId:5333c6894c4460cb033cd36d59ef316b9716e099ae9997c69bc25e869c87b03d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716928766572153338,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-5fmns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a3bda1-29ba-4982-baf5-0adc97b4eb45,},Annotations:map[string]string{io.kubernetes.container.hash: dc173f12,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97ba5f2725852aa19d200b1005c8bc9678a21a2b0a0b768cbf10f1dde692ecbe,PodSandboxId:2a5f076d2569c649f3e0bcb898d11d4c19a26e70dc189cb8501def24b83cdcec,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716928761367501777,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ng8mq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0b1264-09c7-44b2-ba8c-e145e825fdbe,},Annotations:map[string]string{io.kubernetes.container.hash: 3bac967e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05d5882852e6eb482b9e3ba850d60958305fa84234fd741f3345967a57b1c1f9,PodSandboxId:54beb07b658e504e1a48a48b45286f12adc245e7ae631b29d6fdd4debbc5e82a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f
9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716928741088331292,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f1c036da804e98d08f1608248bc0a8f,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:650c6f374c3b3e6697c77d68d23551a1335fae362d137a464df72e2bc23e4e14,PodSandboxId:232d528c76896db38a96f5da2a0ed0761f4f63d26ec15dd06b5b221600ccabff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1716928740991520694,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35af7b91312437941e451d7e638db460,},Annotations:map[string]string{io.kubernetes.container.hash: cd1ed920,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=85c3968c-ab0e-40f4-afa3-92babc026749 name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:55:02 ha-908878 crio[3860]: time="2024-05-28 20:55:02.884832145Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0a75f39c-5047-4f21-9d55-003fcdb7826b name=/runtime.v1.RuntimeService/Version
	May 28 20:55:02 ha-908878 crio[3860]: time="2024-05-28 20:55:02.884959703Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0a75f39c-5047-4f21-9d55-003fcdb7826b name=/runtime.v1.RuntimeService/Version
	May 28 20:55:02 ha-908878 crio[3860]: time="2024-05-28 20:55:02.886366248Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b61f7dd3-a2bc-4ad4-99f3-768af2d17551 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 20:55:02 ha-908878 crio[3860]: time="2024-05-28 20:55:02.887092676Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716929702887057302,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b61f7dd3-a2bc-4ad4-99f3-768af2d17551 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 20:55:02 ha-908878 crio[3860]: time="2024-05-28 20:55:02.890165812Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=18e972d4-8904-424f-9892-dbf33431c6d0 name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:55:02 ha-908878 crio[3860]: time="2024-05-28 20:55:02.890252318Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=18e972d4-8904-424f-9892-dbf33431c6d0 name=/runtime.v1.RuntimeService/ListContainers
	May 28 20:55:02 ha-908878 crio[3860]: time="2024-05-28 20:55:02.890834511Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:23d89b9262db69731a4648e821b6ee02ddbcd64e953f442b0b1c790ad99e06bb,PodSandboxId:a10360900a6684a37e5da1e4b53126389f65ecdb98ad83f88412addfdf85e402,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716929492179728308,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79872e2-b267-446a-99dc-5bf9f398d31c,},Annotations:map[string]string{io.kubernetes.container.hash: 1d6aac92,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba9034a620e1e980ba83271c2686d0e2cc1672f83aa4b19c3789a2bcda09a040,PodSandboxId:cf4ac2dd6e8f4c464322a6fed240ad979ba60c14ae71f12bcd94430d7a8d2028,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1716929468185478357,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4mzh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8069a7ea-0ab1-4064-b982-867dbdfd97aa,},Annotations:map[string]string{io.kubernetes.container.hash: 730aff3c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4de963c4394e867a25c17c67c272c43fcf46165892ff733312639139f72f5cc8,PodSandboxId:a10360900a6684a37e5da1e4b53126389f65ecdb98ad83f88412addfdf85e402,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716929446183340156,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d79872e2-b267-446a-99dc-5bf9f398d31c,},Annotations:map[string]string{io.kubernetes.container.hash: 1d6aac92,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5457554337f0db189ea45788d726f4b927d69bfc466967041382543f02be8b80,PodSandboxId:a140a8d8883554f2f866690c6b175764121c70dbd919a9954e74a7003deaccca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716929446177076854,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 386e57d3e5870130913571793cc3ee94,},Annotations:map[string]string{io.kubernetes.container.hash: f40b1b90,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f796b4c1fcb382b678803794da8228990266f95adb1e99f62cb07eb6a2dc2b0e,PodSandboxId:fc72b589827f1a220bf5901fb646f2f7056f1d2ebd15499fbaf01fc162c34ed1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716929443175818326,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c574865fe7260be39c1b7f67609414,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:805069c4ea3f52b056a6db0b2cbc48c32b9aec82e2eced5975d131a3d6813894,PodSandboxId:4cffb5b7c6c9c681ffed44e3b79a1b4db97beb4e3bc56f7c0bebbf9be6e48c4d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716929433445060154,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ljbzs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a49d7b7-d8ae-44a8-8393-51781cf73591,},Annotations:map[string]string{io.kubernetes.container.hash: 1fd948c0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41c6e92ce9a6765f9775e85692043edbc3cacb6d1eb3f9c07f81dc5fc71305a5,PodSandboxId:144ffb432d19700f4db1dbee070861a5bafb881fb8aaf7bd4c4b4a06bebe57fb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1716929411246533502,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e83b4276fb38a7bed5e82c53c2dba82,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:278ab03af8f2313b514521259d04b44733ed754d0d537ac169ab50c94bc2944c,PodSandboxId:ea611d2d609918a13a673876b8f432aa70108f7177ebae9950b9f6eccbdc2ab9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716929400629960416,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ng8mq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0b1264-09c7-44b2-ba8c-e145e825fdbe,},Annotations:map[string]string{io.kubernetes.container.hash: 3bac967e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbe5284b
67f85e03d1cd00b308ec28baa2dd3f031d560ef8bcee3e352afbd4f0,PodSandboxId:cf4ac2dd6e8f4c464322a6fed240ad979ba60c14ae71f12bcd94430d7a8d2028,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1716929400347691445,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4mzh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8069a7ea-0ab1-4064-b982-867dbdfd97aa,},Annotations:map[string]string{io.kubernetes.container.hash: 730aff3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4514d4354f5473ce91d574749050b5534564282802b
dbac026aa2ea297033f90,PodSandboxId:3a8f3b7df90322d86bb148e5f38eae2fe33ca7873be9f745c0c6db25143dc42a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716929400480532978,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mvx67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b51beb7-0397-4008-b878-97edd41c6b94,},Annotations:map[string]string{io.kubernetes.container.hash: 10935396,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d3a1aa224cb0622f2a2e529a50b304df8ff9e2738e1dc6698099e3a06dba136,PodSandboxId:c7cb03481617d80cb9f9dcef56558b44a28163ee1857e6c9900a3ff7ef9db308,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716929400208165709,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35af7b91312437941e451d7e638db460,},Annotations:map[string]string{io.kubernetes.container.hash: cd1ed920,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c540689ad07f7ee30a68b6951597a2a7519d5077f0d3603cb0a035ebaab6dc15,PodSandboxId:d59153638107de764c3747df65809d3cdc474479ba91b817e5d7b3c598f84cb1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716929400194256948,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5fmns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a3bda1-29ba-4982-baf5-0adc97b4eb45,},Annotations:map[string]string{io.kubernetes.container.hash: dc173f12,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"}
,{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:512ee36cfc30be3bbea5a2852da22e2052a8bc2f3c461e7e0f1c5bdab2356ceb,PodSandboxId:fc72b589827f1a220bf5901fb646f2f7056f1d2ebd15499fbaf01fc162c34ed1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716929400105679968,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09c574865fe7260be39c1b7f676094
14,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7611fb5205e4379847769c096d7bee8dd1dcbf6921b3cd3e9e213bd651f0650a,PodSandboxId:bfa5f863df0d89a8dc8be0920e4334f7380837022415720c4d0b630df3fc2adf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716929400018305553,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f1c036da804e98d08f1608248bc0a8f,},Annotations:map[
string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eea72764c6cef58a30efce77aa95fcea5c0414983c225f7e96112a5a8c65c5a,PodSandboxId:a140a8d8883554f2f866690c6b175764121c70dbd919a9954e74a7003deaccca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716929399958628122,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 386e57d3e5870130913571793cc3ee94,},Annotations:map[string]string{io.kuber
netes.container.hash: f40b1b90,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92c83dd481e567399f2a52a7fbc48eccc99c0cb418fa571fd10df44f361a9f39,PodSandboxId:dfbac4c22bc27b76f3af221ef78b5eb13bffba587238a75f6978ec32db0d2669,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716928912917661950,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-ljbzs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3a49d7b7-d8ae-44a8-8393-51781cf73591,},Annotations:map[string]string{io.kuberne
tes.container.hash: 1fd948c0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c38e07fa546e3574439db0d8c463f8598be9b5ee1b022328a10e812abe191a6,PodSandboxId:fb8a83ba500b4c9c50bcef1be750bfc7e5a7db94160b054581efcafa03519147,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716928766590753062,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mvx67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b51beb7-0397-4008-b878-97edd41c6b94,},Annotations:map[string]string{io.kubernetes.container.hash: 10935396,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2470320e3bec5ccf0d58f255e4bc440917dfb3fae12b2a48365113a51d7dd6e9,PodSandboxId:5333c6894c4460cb033cd36d59ef316b9716e099ae9997c69bc25e869c87b03d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716928766572153338,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-5fmns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41a3bda1-29ba-4982-baf5-0adc97b4eb45,},Annotations:map[string]string{io.kubernetes.container.hash: dc173f12,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97ba5f2725852aa19d200b1005c8bc9678a21a2b0a0b768cbf10f1dde692ecbe,PodSandboxId:2a5f076d2569c649f3e0bcb898d11d4c19a26e70dc189cb8501def24b83cdcec,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716928761367501777,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ng8mq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0b1264-09c7-44b2-ba8c-e145e825fdbe,},Annotations:map[string]string{io.kubernetes.container.hash: 3bac967e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05d5882852e6eb482b9e3ba850d60958305fa84234fd741f3345967a57b1c1f9,PodSandboxId:54beb07b658e504e1a48a48b45286f12adc245e7ae631b29d6fdd4debbc5e82a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f
9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716928741088331292,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f1c036da804e98d08f1608248bc0a8f,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:650c6f374c3b3e6697c77d68d23551a1335fae362d137a464df72e2bc23e4e14,PodSandboxId:232d528c76896db38a96f5da2a0ed0761f4f63d26ec15dd06b5b221600ccabff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1716928740991520694,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-908878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35af7b91312437941e451d7e638db460,},Annotations:map[string]string{io.kubernetes.container.hash: cd1ed920,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=18e972d4-8904-424f-9892-dbf33431c6d0 name=/runtime.v1.RuntimeService/ListContainers
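The repeated Request/Response pairs above are routine CRI polling of CRI-O's gRPC endpoint (RuntimeService/ListContainers, RuntimeService/Version, ImageService/ImageFsInfo); the empty filter is why the daemon logs "No filters were applied, returning full container list". For reference only, the same ListContainers call can be issued directly against the node's CRI socket. The sketch below is an illustration, not part of the minikube test harness; it assumes the default CRI-O socket path /var/run/crio/crio.sock and the google.golang.org/grpc and k8s.io/cri-api modules.

// list_containers.go - minimal sketch of the RuntimeService/ListContainers call
// seen in the CRI-O debug log above (illustration only; assumes node access).
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Default CRI-O socket path (assumption; verify on the node).
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CRI socket: %v", err)
	}
	defer conn.Close()

	client := runtimev1.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// An empty filter returns the full container list, as in the log above.
	resp, err := client.ListContainers(ctx, &runtimev1.ListContainersRequest{})
	if err != nil {
		log.Fatalf("ListContainers: %v", err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s\t%s\t%s\n", c.Id[:13], c.Metadata.Name, c.State)
	}
}

The "==> container status <==" table that follows is essentially this same listing rendered by the report collector.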
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	23d89b9262db6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       4                   a10360900a668       storage-provisioner
	ba9034a620e1e       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      3 minutes ago       Running             kindnet-cni               3                   cf4ac2dd6e8f4       kindnet-x4mzh
	4de963c4394e8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       3                   a10360900a668       storage-provisioner
	5457554337f0d       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      4 minutes ago       Running             kube-apiserver            3                   a140a8d888355       kube-apiserver-ha-908878
	f796b4c1fcb38       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      4 minutes ago       Running             kube-controller-manager   2                   fc72b589827f1       kube-controller-manager-ha-908878
	805069c4ea3f5       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   4cffb5b7c6c9c       busybox-fc5497c4f-ljbzs
	41c6e92ce9a67       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago       Running             kube-vip                  0                   144ffb432d197       kube-vip-ha-908878
	278ab03af8f23       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      5 minutes ago       Running             kube-proxy                1                   ea611d2d60991       kube-proxy-ng8mq
	4514d4354f547       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   3a8f3b7df9032       coredns-7db6d8ff4d-mvx67
	bbe5284b67f85       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      5 minutes ago       Exited              kindnet-cni               2                   cf4ac2dd6e8f4       kindnet-x4mzh
	7d3a1aa224cb0       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      5 minutes ago       Running             etcd                      1                   c7cb03481617d       etcd-ha-908878
	c540689ad07f7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   d59153638107d       coredns-7db6d8ff4d-5fmns
	512ee36cfc30b       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      5 minutes ago       Exited              kube-controller-manager   1                   fc72b589827f1       kube-controller-manager-ha-908878
	7611fb5205e43       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      5 minutes ago       Running             kube-scheduler            1                   bfa5f863df0d8       kube-scheduler-ha-908878
	1eea72764c6ce       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      5 minutes ago       Exited              kube-apiserver            2                   a140a8d888355       kube-apiserver-ha-908878
	92c83dd481e56       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   dfbac4c22bc27       busybox-fc5497c4f-ljbzs
	7c38e07fa546e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago      Exited              coredns                   0                   fb8a83ba500b4       coredns-7db6d8ff4d-mvx67
	2470320e3bec5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago      Exited              coredns                   0                   5333c6894c446       coredns-7db6d8ff4d-5fmns
	97ba5f2725852       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      15 minutes ago      Exited              kube-proxy                0                   2a5f076d2569c       kube-proxy-ng8mq
	05d5882852e6e       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      16 minutes ago      Exited              kube-scheduler            0                   54beb07b658e5       kube-scheduler-ha-908878
	650c6f374c3b3       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      16 minutes ago      Exited              etcd                      0                   232d528c76896       etcd-ha-908878
	
	
	==> coredns [2470320e3bec5ccf0d58f255e4bc440917dfb3fae12b2a48365113a51d7dd6e9] <==
	[INFO] 10.244.2.2:41613 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002489251s
	[INFO] 10.244.2.2:55408 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000147549s
	[INFO] 10.244.0.4:57170 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000374705s
	[INFO] 10.244.0.4:58966 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000155963s
	[INFO] 10.244.0.4:35423 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000111865s
	[INFO] 10.244.1.2:37835 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000079714s
	[INFO] 10.244.1.2:45922 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000128914s
	[INFO] 10.244.2.2:49120 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102234s
	[INFO] 10.244.2.2:59817 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113316s
	[INFO] 10.244.1.2:33990 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104132s
	[INFO] 10.244.1.2:57343 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000065665s
	[INFO] 10.244.1.2:37008 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000144249s
	[INFO] 10.244.2.2:57641 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000201576s
	[INFO] 10.244.0.4:55430 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00016202s
	[INFO] 10.244.0.4:58197 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000154574s
	[INFO] 10.244.0.4:43002 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000159971s
	[INFO] 10.244.1.2:33008 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159565s
	[INFO] 10.244.1.2:55799 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000106231s
	[INFO] 10.244.1.2:34935 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000119985s
	[INFO] 10.244.1.2:55524 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000077247s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [4514d4354f5473ce91d574749050b5534564282802bdbac026aa2ea297033f90] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:52778->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:52778->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:52788->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:52788->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [7c38e07fa546e3574439db0d8c463f8598be9b5ee1b022328a10e812abe191a6] <==
	[INFO] 10.244.2.2:58602 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000170027s
	[INFO] 10.244.0.4:43029 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001811296s
	[INFO] 10.244.0.4:49612 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098819s
	[INFO] 10.244.0.4:33728 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000042492s
	[INFO] 10.244.0.4:34284 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001158314s
	[INFO] 10.244.0.4:52540 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000045508s
	[INFO] 10.244.1.2:36534 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139592s
	[INFO] 10.244.1.2:55059 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00181265s
	[INFO] 10.244.1.2:57133 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001147785s
	[INFO] 10.244.1.2:59156 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00008284s
	[INFO] 10.244.1.2:56011 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000189969s
	[INFO] 10.244.1.2:57157 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076075s
	[INFO] 10.244.2.2:38176 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112538s
	[INFO] 10.244.2.2:54457 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000111343s
	[INFO] 10.244.0.4:46728 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104994s
	[INFO] 10.244.0.4:49514 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000077463s
	[INFO] 10.244.0.4:40805 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103396s
	[INFO] 10.244.0.4:41445 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093035s
	[INFO] 10.244.1.2:48615 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169745s
	[INFO] 10.244.2.2:39740 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00022698s
	[INFO] 10.244.2.2:42139 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000182159s
	[INFO] 10.244.2.2:54665 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00035602s
	[INFO] 10.244.0.4:33063 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000104255s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c540689ad07f7ee30a68b6951597a2a7519d5077f0d3603cb0a035ebaab6dc15] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:60436->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:60436->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
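
The connection refused / no route to host failures above, in both coredns containers, are the kubernetes plugin's reflector retrying its initial List of Services, Namespaces, and EndpointSlices against the in-cluster apiserver VIP (10.96.0.1:443) while the control plane restarts; the ready plugin keeps reporting "Still waiting" until those caches sync. A minimal client-go sketch of the same EndpointSlice List call, assuming in-cluster credentials (illustrative only, not the CoreDNS plugin's code):

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// In-cluster config resolves the same service VIP (10.96.0.1:443 here)
	// that the reflector in the log above is failing to reach.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatalf("in-cluster config: %v", err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatalf("clientset: %v", err)
	}
	// Same shape as the reflector's initial list: limit=500, all namespaces.
	slices, err := cs.DiscoveryV1().EndpointSlices(metav1.NamespaceAll).
		List(context.TODO(), metav1.ListOptions{Limit: 500})
	if err != nil {
		// While the apiserver is down this fails with the errors seen above.
		log.Fatalf("list endpointslices: %v", err)
	}
	fmt.Printf("listed %d endpointslices\n", len(slices.Items))
}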
	
	
	==> describe nodes <==
	Name:               ha-908878
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-908878
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82
	                    minikube.k8s.io/name=ha-908878
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_28T20_39_08_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 20:39:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-908878
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 20:55:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 May 2024 20:50:50 +0000   Tue, 28 May 2024 20:39:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 May 2024 20:50:50 +0000   Tue, 28 May 2024 20:39:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 May 2024 20:50:50 +0000   Tue, 28 May 2024 20:39:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 May 2024 20:50:50 +0000   Tue, 28 May 2024 20:39:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.100
	  Hostname:    ha-908878
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a470f4bebd094a03b2a08db3a205d097
	  System UUID:                a470f4be-bd09-4a03-b2a0-8db3a205d097
	  Boot ID:                    e5dc2485-8c44-4c4f-899c-7eb02750525b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-ljbzs              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7db6d8ff4d-5fmns             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7db6d8ff4d-mvx67             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-ha-908878                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-x4mzh                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-908878             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-908878    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-ng8mq                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-908878             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-908878                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 15m    kube-proxy       
	  Normal   Starting                 4m14s  kube-proxy       
	  Normal   NodeHasNoDiskPressure    15m    kubelet          Node ha-908878 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 15m    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  15m    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  15m    kubelet          Node ha-908878 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     15m    kubelet          Node ha-908878 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           15m    node-controller  Node ha-908878 event: Registered Node ha-908878 in Controller
	  Normal   NodeReady                15m    kubelet          Node ha-908878 status is now: NodeReady
	  Normal   RegisteredNode           14m    node-controller  Node ha-908878 event: Registered Node ha-908878 in Controller
	  Normal   RegisteredNode           13m    node-controller  Node ha-908878 event: Registered Node ha-908878 in Controller
	  Warning  ContainerGCFailed        5m56s  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m6s   node-controller  Node ha-908878 event: Registered Node ha-908878 in Controller
	  Normal   RegisteredNode           4m3s   node-controller  Node ha-908878 event: Registered Node ha-908878 in Controller
	  Normal   RegisteredNode           3m9s   node-controller  Node ha-908878 event: Registered Node ha-908878 in Controller
	
	
	Name:               ha-908878-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-908878-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82
	                    minikube.k8s.io/name=ha-908878
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_28T20_40_11_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 20:40:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-908878-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 20:54:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 May 2024 20:53:43 +0000   Tue, 28 May 2024 20:53:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 May 2024 20:53:43 +0000   Tue, 28 May 2024 20:53:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 May 2024 20:53:43 +0000   Tue, 28 May 2024 20:53:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 May 2024 20:53:43 +0000   Tue, 28 May 2024 20:53:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.239
	  Hostname:    ha-908878-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8f91cea3af174de9a05db650e4662bbb
	  System UUID:                8f91cea3-af17-4de9-a05d-b650e4662bbb
	  Boot ID:                    6b8d7163-e895-4f42-9b4a-9c98cd4f26a8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-rfl74                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-908878-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-6prxw                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-908878-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-908878-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-pg89k                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-908878-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-908878-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)   100m (5%)
	  memory             150Mi (7%)   50Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 14m                    kube-proxy       
	  Normal  Starting                 3m56s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-908878-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node ha-908878-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-908878-m02 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           14m                    node-controller  Node ha-908878-m02 event: Registered Node ha-908878-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-908878-m02 event: Registered Node ha-908878-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-908878-m02 event: Registered Node ha-908878-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-908878-m02 status is now: NodeNotReady
	  Normal  NodeHasNoDiskPressure    4m41s (x8 over 4m41s)  kubelet          Node ha-908878-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 4m41s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m41s (x8 over 4m41s)  kubelet          Node ha-908878-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     4m41s (x7 over 4m41s)  kubelet          Node ha-908878-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m6s                   node-controller  Node ha-908878-m02 event: Registered Node ha-908878-m02 in Controller
	  Normal  RegisteredNode           4m3s                   node-controller  Node ha-908878-m02 event: Registered Node ha-908878-m02 in Controller
	  Normal  RegisteredNode           3m9s                   node-controller  Node ha-908878-m02 event: Registered Node ha-908878-m02 in Controller
	  Normal  NodeNotReady             110s                   node-controller  Node ha-908878-m02 status is now: NodeNotReady
	
	
	Name:               ha-908878-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-908878-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82
	                    minikube.k8s.io/name=ha-908878
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_28T20_42_26_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 20:42:25 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-908878-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 20:52:35 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 28 May 2024 20:52:14 +0000   Tue, 28 May 2024 20:53:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 28 May 2024 20:52:14 +0000   Tue, 28 May 2024 20:53:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 28 May 2024 20:52:14 +0000   Tue, 28 May 2024 20:53:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 28 May 2024 20:52:14 +0000   Tue, 28 May 2024 20:53:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.38
	  Hostname:    ha-908878-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f3c86941732a4e078803ce72d6cca1eb
	  System UUID:                f3c86941-732a-4e07-8803-ce72d6cca1eb
	  Boot ID:                    43d26a07-1717-47b3-b09c-28f8499f97e0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-6z4nb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-68kxq              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-bnh2w           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)   100m (5%)
	  memory             50Mi (2%)   50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 2m44s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-908878-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-908878-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-908878-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           12m                    node-controller  Node ha-908878-m04 event: Registered Node ha-908878-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-908878-m04 event: Registered Node ha-908878-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-908878-m04 event: Registered Node ha-908878-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-908878-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m5s                   node-controller  Node ha-908878-m04 event: Registered Node ha-908878-m04 in Controller
	  Normal   RegisteredNode           4m3s                   node-controller  Node ha-908878-m04 event: Registered Node ha-908878-m04 in Controller
	  Normal   NodeNotReady             3m25s                  node-controller  Node ha-908878-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m9s                   node-controller  Node ha-908878-m04 event: Registered Node ha-908878-m04 in Controller
	  Normal   Starting                 2m49s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m49s (x2 over 2m49s)  kubelet          Node ha-908878-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m49s (x2 over 2m49s)  kubelet          Node ha-908878-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m49s (x2 over 2m49s)  kubelet          Node ha-908878-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m49s                  kubelet          Node ha-908878-m04 has been rebooted, boot id: 43d26a07-1717-47b3-b09c-28f8499f97e0
	  Normal   NodeReady                2m49s                  kubelet          Node ha-908878-m04 status is now: NodeReady
	  Normal   NodeNotReady             108s                   node-controller  Node ha-908878-m04 status is now: NodeNotReady
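
The Unknown conditions and the trailing NodeNotReady event show the ha-908878-m04 kubelet going quiet after the node was stopped and rebooted during the test. A short client-go sketch that surfaces the same per-node Ready condition (illustrative only; the default kubeconfig path is an assumption):

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig (path is an assumption).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatalf("kubeconfig: %v", err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatalf("clientset: %v", err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatalf("list nodes: %v", err)
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				// Status "Unknown" corresponds to "Kubelet stopped posting node status."
				fmt.Printf("%-16s Ready=%s reason=%s\n", n.Name, c.Status, c.Reason)
			}
		}
	}
}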
	
	
	==> dmesg <==
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.578430] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.054216] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.052934] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.180850] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.119729] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.261744] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.070195] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +5.007183] systemd-fstab-generator[954]: Ignoring "noauto" option for root device
	[  +0.062643] kauditd_printk_skb: 158 callbacks suppressed
	[May28 20:39] systemd-fstab-generator[1373]: Ignoring "noauto" option for root device
	[  +0.085155] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.532403] kauditd_printk_skb: 18 callbacks suppressed
	[ +13.860818] kauditd_printk_skb: 38 callbacks suppressed
	[May28 20:40] kauditd_printk_skb: 24 callbacks suppressed
	[May28 20:49] systemd-fstab-generator[3779]: Ignoring "noauto" option for root device
	[  +0.157839] systemd-fstab-generator[3791]: Ignoring "noauto" option for root device
	[  +0.181723] systemd-fstab-generator[3805]: Ignoring "noauto" option for root device
	[  +0.155164] systemd-fstab-generator[3817]: Ignoring "noauto" option for root device
	[  +0.272755] systemd-fstab-generator[3845]: Ignoring "noauto" option for root device
	[  +0.796593] systemd-fstab-generator[3958]: Ignoring "noauto" option for root device
	[May28 20:50] kauditd_printk_skb: 223 callbacks suppressed
	[ +11.598626] kauditd_printk_skb: 1 callbacks suppressed
	[ +39.340094] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [650c6f374c3b3e6697c77d68d23551a1335fae362d137a464df72e2bc23e4e14] <==
	2024/05/28 20:48:25 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-05-28T20:48:25.37315Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"667.06506ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/\" range_end:\"/registry/clusterrolebindings0\" limit:500 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-05-28T20:48:25.373212Z","caller":"traceutil/trace.go:171","msg":"trace[370914826] range","detail":"{range_begin:/registry/clusterrolebindings/; range_end:/registry/clusterrolebindings0; }","duration":"667.160516ms","start":"2024-05-28T20:48:24.706046Z","end":"2024-05-28T20:48:25.373207Z","steps":["trace[370914826] 'agreement among raft nodes before linearized reading'  (duration: 667.088851ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T20:48:25.37329Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-28T20:48:24.706033Z","time spent":"667.211188ms","remote":"127.0.0.1:45608","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":0,"response size":0,"request content":"key:\"/registry/clusterrolebindings/\" range_end:\"/registry/clusterrolebindings0\" limit:500 "}
	2024/05/28 20:48:25 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-05-28T20:48:25.432024Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.100:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-28T20:48:25.43244Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.100:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-28T20:48:25.432546Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"3276445ff8d31e34","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-05-28T20:48:25.432815Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"a38e88b5839dc078"}
	{"level":"info","ts":"2024-05-28T20:48:25.432917Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"a38e88b5839dc078"}
	{"level":"info","ts":"2024-05-28T20:48:25.432974Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"a38e88b5839dc078"}
	{"level":"info","ts":"2024-05-28T20:48:25.433131Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078"}
	{"level":"info","ts":"2024-05-28T20:48:25.433234Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078"}
	{"level":"info","ts":"2024-05-28T20:48:25.43327Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"3276445ff8d31e34","remote-peer-id":"a38e88b5839dc078"}
	{"level":"info","ts":"2024-05-28T20:48:25.4333Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"a38e88b5839dc078"}
	{"level":"info","ts":"2024-05-28T20:48:25.433308Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"94cfe90357540c6b"}
	{"level":"info","ts":"2024-05-28T20:48:25.433316Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"94cfe90357540c6b"}
	{"level":"info","ts":"2024-05-28T20:48:25.43337Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"94cfe90357540c6b"}
	{"level":"info","ts":"2024-05-28T20:48:25.433467Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"3276445ff8d31e34","remote-peer-id":"94cfe90357540c6b"}
	{"level":"info","ts":"2024-05-28T20:48:25.433512Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3276445ff8d31e34","remote-peer-id":"94cfe90357540c6b"}
	{"level":"info","ts":"2024-05-28T20:48:25.43354Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"3276445ff8d31e34","remote-peer-id":"94cfe90357540c6b"}
	{"level":"info","ts":"2024-05-28T20:48:25.43355Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"94cfe90357540c6b"}
	{"level":"info","ts":"2024-05-28T20:48:25.436166Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.100:2380"}
	{"level":"info","ts":"2024-05-28T20:48:25.436301Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.100:2380"}
	{"level":"info","ts":"2024-05-28T20:48:25.436333Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-908878","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.100:2380"],"advertise-client-urls":["https://192.168.39.100:2379"]}
	
	
	==> etcd [7d3a1aa224cb0622f2a2e529a50b304df8ff9e2738e1dc6698099e3a06dba136] <==
	{"level":"warn","ts":"2024-05-28T20:51:36.472089Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"94cfe90357540c6b","rtt":"0s","error":"dial tcp 192.168.39.73:2380: connect: connection refused"}
	{"level":"info","ts":"2024-05-28T20:51:36.561915Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"94cfe90357540c6b"}
	{"level":"info","ts":"2024-05-28T20:51:36.562033Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3276445ff8d31e34","remote-peer-id":"94cfe90357540c6b"}
	{"level":"info","ts":"2024-05-28T20:51:36.562188Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"3276445ff8d31e34","remote-peer-id":"94cfe90357540c6b"}
	{"level":"info","ts":"2024-05-28T20:51:36.594302Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"3276445ff8d31e34","to":"94cfe90357540c6b","stream-type":"stream Message"}
	{"level":"info","ts":"2024-05-28T20:51:36.594368Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"3276445ff8d31e34","remote-peer-id":"94cfe90357540c6b"}
	{"level":"info","ts":"2024-05-28T20:51:36.59472Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"3276445ff8d31e34","to":"94cfe90357540c6b","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-05-28T20:51:36.594745Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"3276445ff8d31e34","remote-peer-id":"94cfe90357540c6b"}
	{"level":"info","ts":"2024-05-28T20:52:28.485299Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 switched to configuration voters=(3636168928135421492 11785507588053778552)"}
	{"level":"info","ts":"2024-05-28T20:52:28.487349Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"6cf58294dcaef1c8","local-member-id":"3276445ff8d31e34","removed-remote-peer-id":"94cfe90357540c6b","removed-remote-peer-urls":["https://192.168.39.73:2380"]}
	{"level":"info","ts":"2024-05-28T20:52:28.487462Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"94cfe90357540c6b"}
	{"level":"warn","ts":"2024-05-28T20:52:28.488064Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"94cfe90357540c6b"}
	{"level":"info","ts":"2024-05-28T20:52:28.488175Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"94cfe90357540c6b"}
	{"level":"warn","ts":"2024-05-28T20:52:28.488466Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"94cfe90357540c6b"}
	{"level":"info","ts":"2024-05-28T20:52:28.488492Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"94cfe90357540c6b"}
	{"level":"info","ts":"2024-05-28T20:52:28.488657Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"3276445ff8d31e34","remote-peer-id":"94cfe90357540c6b"}
	{"level":"warn","ts":"2024-05-28T20:52:28.488851Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3276445ff8d31e34","remote-peer-id":"94cfe90357540c6b","error":"context canceled"}
	{"level":"warn","ts":"2024-05-28T20:52:28.489004Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"94cfe90357540c6b","error":"failed to read 94cfe90357540c6b on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-05-28T20:52:28.489043Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3276445ff8d31e34","remote-peer-id":"94cfe90357540c6b"}
	{"level":"warn","ts":"2024-05-28T20:52:28.48918Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"3276445ff8d31e34","remote-peer-id":"94cfe90357540c6b","error":"context canceled"}
	{"level":"info","ts":"2024-05-28T20:52:28.489227Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"3276445ff8d31e34","remote-peer-id":"94cfe90357540c6b"}
	{"level":"info","ts":"2024-05-28T20:52:28.489244Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"94cfe90357540c6b"}
	{"level":"info","ts":"2024-05-28T20:52:28.489257Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"3276445ff8d31e34","removed-remote-peer-id":"94cfe90357540c6b"}
	{"level":"warn","ts":"2024-05-28T20:52:28.503835Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"3276445ff8d31e34","remote-peer-id-stream-handler":"3276445ff8d31e34","remote-peer-id-from":"94cfe90357540c6b"}
	{"level":"warn","ts":"2024-05-28T20:52:28.504689Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.73:55178","server-name":"","error":"read tcp 192.168.39.100:2380->192.168.39.73:55178: read: connection reset by peer"}
	
	
	==> kernel <==
	 20:55:03 up 16 min,  0 users,  load average: 0.57, 0.60, 0.38
	Linux ha-908878 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [ba9034a620e1e980ba83271c2686d0e2cc1672f83aa4b19c3789a2bcda09a040] <==
	I0528 20:54:19.330447       1 main.go:250] Node ha-908878-m04 has CIDR [10.244.3.0/24] 
	I0528 20:54:29.338389       1 main.go:223] Handling node with IPs: map[192.168.39.100:{}]
	I0528 20:54:29.338609       1 main.go:227] handling current node
	I0528 20:54:29.338652       1 main.go:223] Handling node with IPs: map[192.168.39.239:{}]
	I0528 20:54:29.338674       1 main.go:250] Node ha-908878-m02 has CIDR [10.244.1.0/24] 
	I0528 20:54:29.338784       1 main.go:223] Handling node with IPs: map[192.168.39.38:{}]
	I0528 20:54:29.338802       1 main.go:250] Node ha-908878-m04 has CIDR [10.244.3.0/24] 
	I0528 20:54:39.349431       1 main.go:223] Handling node with IPs: map[192.168.39.100:{}]
	I0528 20:54:39.349488       1 main.go:227] handling current node
	I0528 20:54:39.349508       1 main.go:223] Handling node with IPs: map[192.168.39.239:{}]
	I0528 20:54:39.349516       1 main.go:250] Node ha-908878-m02 has CIDR [10.244.1.0/24] 
	I0528 20:54:39.349718       1 main.go:223] Handling node with IPs: map[192.168.39.38:{}]
	I0528 20:54:39.349759       1 main.go:250] Node ha-908878-m04 has CIDR [10.244.3.0/24] 
	I0528 20:54:49.355943       1 main.go:223] Handling node with IPs: map[192.168.39.100:{}]
	I0528 20:54:49.355981       1 main.go:227] handling current node
	I0528 20:54:49.355993       1 main.go:223] Handling node with IPs: map[192.168.39.239:{}]
	I0528 20:54:49.355998       1 main.go:250] Node ha-908878-m02 has CIDR [10.244.1.0/24] 
	I0528 20:54:49.356111       1 main.go:223] Handling node with IPs: map[192.168.39.38:{}]
	I0528 20:54:49.356135       1 main.go:250] Node ha-908878-m04 has CIDR [10.244.3.0/24] 
	I0528 20:54:59.366851       1 main.go:223] Handling node with IPs: map[192.168.39.100:{}]
	I0528 20:54:59.366968       1 main.go:227] handling current node
	I0528 20:54:59.367005       1 main.go:223] Handling node with IPs: map[192.168.39.239:{}]
	I0528 20:54:59.367041       1 main.go:250] Node ha-908878-m02 has CIDR [10.244.1.0/24] 
	I0528 20:54:59.367174       1 main.go:223] Handling node with IPs: map[192.168.39.38:{}]
	I0528 20:54:59.367208       1 main.go:250] Node ha-908878-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [bbe5284b67f85e03d1cd00b308ec28baa2dd3f031d560ef8bcee3e352afbd4f0] <==
	I0528 20:50:01.030652       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0528 20:50:18.658985       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0528 20:50:21.729380       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0528 20:50:27.875149       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0528 20:50:30.945338       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0528 20:50:33.947984       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xe3b
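
The panic above is kindnetd giving up after a bounded number of node-list retries while the apiserver VIP was unreachable during the restart. A small retry loop of the same shape, using client-go (the function name, retry count, and backoff are illustrative assumptions, not kindnetd's actual main.go):

package main

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// listNodesWithRetry mirrors the pattern in the log above: retry a node list a
// bounded number of times, then give up (here with an error rather than a panic).
func listNodesWithRetry(cs kubernetes.Interface, maxRetries int, backoff time.Duration) error {
	var lastErr error
	for i := 0; i < maxRetries; i++ {
		_, lastErr = cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if lastErr == nil {
			return nil
		}
		log.Printf("Failed to get nodes, retrying after error: %v", lastErr)
		time.Sleep(backoff)
	}
	return lastErr
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatalf("in-cluster config: %v", err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatalf("clientset: %v", err)
	}
	if err := listNodesWithRetry(cs, 5, 3*time.Second); err != nil {
		log.Fatalf("reached maximum retries obtaining node list: %v", err)
	}
}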
	
	
	==> kube-apiserver [1eea72764c6cef58a30efce77aa95fcea5c0414983c225f7e96112a5a8c65c5a] <==
	I0528 20:50:00.730975       1 options.go:221] external host was not specified, using 192.168.39.100
	I0528 20:50:00.732047       1 server.go:148] Version: v1.30.1
	I0528 20:50:00.732119       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 20:50:01.884835       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0528 20:50:01.894243       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0528 20:50:01.897938       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0528 20:50:01.897973       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0528 20:50:01.898154       1 instance.go:299] Using reconciler: lease
	W0528 20:50:21.883703       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0528 20:50:21.883820       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0528 20:50:21.898845       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0528 20:50:21.898859       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [5457554337f0db189ea45788d726f4b927d69bfc466967041382543f02be8b80] <==
	I0528 20:50:48.111147       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0528 20:50:48.182270       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0528 20:50:48.195819       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0528 20:50:48.195927       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0528 20:50:48.195957       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0528 20:50:48.195929       1 policy_source.go:224] refreshing policies
	I0528 20:50:48.196420       1 shared_informer.go:320] Caches are synced for configmaps
	I0528 20:50:48.198598       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0528 20:50:48.200454       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0528 20:50:48.200644       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0528 20:50:48.207918       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0528 20:50:48.211133       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0528 20:50:48.211226       1 aggregator.go:165] initial CRD sync complete...
	I0528 20:50:48.211289       1 autoregister_controller.go:141] Starting autoregister controller
	I0528 20:50:48.211330       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0528 20:50:48.211355       1 cache.go:39] Caches are synced for autoregister controller
	I0528 20:50:48.287070       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	W0528 20:50:48.303666       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.239 192.168.39.73]
	I0528 20:50:48.305017       1 controller.go:615] quota admission added evaluator for: endpoints
	I0528 20:50:48.321855       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0528 20:50:48.331594       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0528 20:50:49.108022       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0528 20:50:49.549403       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.100 192.168.39.239 192.168.39.73]
	W0528 20:50:59.550731       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.100 192.168.39.239]
	W0528 20:52:39.559797       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.100 192.168.39.239]
	
	
	==> kube-controller-manager [512ee36cfc30be3bbea5a2852da22e2052a8bc2f3c461e7e0f1c5bdab2356ceb] <==
	I0528 20:50:01.470257       1 serving.go:380] Generated self-signed cert in-memory
	I0528 20:50:02.164786       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0528 20:50:02.164832       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 20:50:02.166698       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0528 20:50:02.166847       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0528 20:50:02.167131       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0528 20:50:02.167273       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0528 20:50:22.906158       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.100:8443/healthz\": dial tcp 192.168.39.100:8443: connect: connection refused"
	
	
	==> kube-controller-manager [f796b4c1fcb382b678803794da8228990266f95adb1e99f62cb07eb6a2dc2b0e] <==
	I0528 20:52:27.939195       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="199.348µs"
	I0528 20:52:27.952511       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.476µs"
	I0528 20:52:27.960160       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="643.179µs"
	I0528 20:52:28.043514       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.41315ms"
	I0528 20:52:28.044222       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="89.589µs"
	I0528 20:52:40.717361       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-908878-m04"
	E0528 20:52:40.742865       1 garbagecollector.go:399] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"storage.k8s.io/v1", Kind:"CSINode", Name:"ha-908878-m03", UID:"f2c3e977-ab8d-432b-8824-4ea8a520b6d0", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:""}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Node", Name:"ha-908878-m03", UID:"1bc3046e-6006-4be0-98fc-a1a44f2fd40e", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: csinodes.storage.k8s.io "ha-908878-m03" not found
	E0528 20:52:40.750851       1 garbagecollector.go:399] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"coordination.k8s.io/v1", Kind:"Lease", Name:"ha-908878-m03", UID:"4b786f41-1534-4cb5-934b-201ae8b5e070", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:"kube-node-lease"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Node", Name:"ha-908878-m03", UID:"1bc3046e-6006-4be0-98fc-a1a44f2fd40e", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: leases.coordination.k8s.io "ha-908878-m03" not found
	E0528 20:53:00.452468       1 gc_controller.go:153] "Failed to get node" err="node \"ha-908878-m03\" not found" logger="pod-garbage-collector-controller" node="ha-908878-m03"
	E0528 20:53:00.452547       1 gc_controller.go:153] "Failed to get node" err="node \"ha-908878-m03\" not found" logger="pod-garbage-collector-controller" node="ha-908878-m03"
	E0528 20:53:00.452558       1 gc_controller.go:153] "Failed to get node" err="node \"ha-908878-m03\" not found" logger="pod-garbage-collector-controller" node="ha-908878-m03"
	E0528 20:53:00.452566       1 gc_controller.go:153] "Failed to get node" err="node \"ha-908878-m03\" not found" logger="pod-garbage-collector-controller" node="ha-908878-m03"
	E0528 20:53:00.452575       1 gc_controller.go:153] "Failed to get node" err="node \"ha-908878-m03\" not found" logger="pod-garbage-collector-controller" node="ha-908878-m03"
	I0528 20:53:13.184519       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-908878-m04"
	I0528 20:53:13.356152       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.422288ms"
	I0528 20:53:13.357441       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="132.606µs"
	I0528 20:53:15.525510       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.337368ms"
	I0528 20:53:15.525844       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="84.513µs"
	E0528 20:53:20.453232       1 gc_controller.go:153] "Failed to get node" err="node \"ha-908878-m03\" not found" logger="pod-garbage-collector-controller" node="ha-908878-m03"
	E0528 20:53:20.453288       1 gc_controller.go:153] "Failed to get node" err="node \"ha-908878-m03\" not found" logger="pod-garbage-collector-controller" node="ha-908878-m03"
	E0528 20:53:20.453295       1 gc_controller.go:153] "Failed to get node" err="node \"ha-908878-m03\" not found" logger="pod-garbage-collector-controller" node="ha-908878-m03"
	E0528 20:53:20.453300       1 gc_controller.go:153] "Failed to get node" err="node \"ha-908878-m03\" not found" logger="pod-garbage-collector-controller" node="ha-908878-m03"
	E0528 20:53:20.453305       1 gc_controller.go:153] "Failed to get node" err="node \"ha-908878-m03\" not found" logger="pod-garbage-collector-controller" node="ha-908878-m03"
	I0528 20:53:42.824308       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.043997ms"
	I0528 20:53:42.825974       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="163.237µs"
	
	
	==> kube-proxy [278ab03af8f2313b514521259d04b44733ed754d0d537ac169ab50c94bc2944c] <==
	I0528 20:50:02.120613       1 server_linux.go:69] "Using iptables proxy"
	E0528 20:50:04.962702       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-908878\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0528 20:50:08.033299       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-908878\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0528 20:50:11.105324       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-908878\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0528 20:50:17.249326       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-908878\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0528 20:50:29.537227       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-908878\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0528 20:50:48.510627       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.100"]
	I0528 20:50:48.549222       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0528 20:50:48.549285       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0528 20:50:48.549303       1 server_linux.go:165] "Using iptables Proxier"
	I0528 20:50:48.552031       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0528 20:50:48.552207       1 server.go:872] "Version info" version="v1.30.1"
	I0528 20:50:48.552239       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 20:50:48.554139       1 config.go:192] "Starting service config controller"
	I0528 20:50:48.554172       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0528 20:50:48.554193       1 config.go:101] "Starting endpoint slice config controller"
	I0528 20:50:48.554197       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0528 20:50:48.554579       1 config.go:319] "Starting node config controller"
	I0528 20:50:48.554609       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0528 20:50:48.654934       1 shared_informer.go:320] Caches are synced for node config
	I0528 20:50:48.654981       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0528 20:50:48.654944       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [97ba5f2725852aa19d200b1005c8bc9678a21a2b0a0b768cbf10f1dde692ecbe] <==
	E0528 20:47:12.929634       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1847": dial tcp 192.168.39.254:8443: connect: no route to host
	W0528 20:47:16.001367       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1847": dial tcp 192.168.39.254:8443: connect: no route to host
	E0528 20:47:16.001467       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1847": dial tcp 192.168.39.254:8443: connect: no route to host
	W0528 20:47:16.001422       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-908878&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	E0528 20:47:16.001555       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-908878&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	W0528 20:47:16.001503       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1837": dial tcp 192.168.39.254:8443: connect: no route to host
	E0528 20:47:16.001629       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1837": dial tcp 192.168.39.254:8443: connect: no route to host
	W0528 20:47:22.146230       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1837": dial tcp 192.168.39.254:8443: connect: no route to host
	E0528 20:47:22.146708       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1837": dial tcp 192.168.39.254:8443: connect: no route to host
	W0528 20:47:22.146855       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1847": dial tcp 192.168.39.254:8443: connect: no route to host
	E0528 20:47:22.146956       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1847": dial tcp 192.168.39.254:8443: connect: no route to host
	W0528 20:47:22.146980       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-908878&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	E0528 20:47:22.147075       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-908878&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	W0528 20:47:31.361841       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-908878&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	E0528 20:47:31.362504       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-908878&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	W0528 20:47:34.434132       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1837": dial tcp 192.168.39.254:8443: connect: no route to host
	E0528 20:47:34.434215       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1837": dial tcp 192.168.39.254:8443: connect: no route to host
	W0528 20:47:34.434278       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1847": dial tcp 192.168.39.254:8443: connect: no route to host
	E0528 20:47:34.434322       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1847": dial tcp 192.168.39.254:8443: connect: no route to host
	W0528 20:47:55.939092       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1847": dial tcp 192.168.39.254:8443: connect: no route to host
	E0528 20:47:55.939445       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1847": dial tcp 192.168.39.254:8443: connect: no route to host
	W0528 20:47:59.009578       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-908878&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	E0528 20:47:59.009782       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-908878&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	W0528 20:48:02.084397       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1837": dial tcp 192.168.39.254:8443: connect: no route to host
	E0528 20:48:02.084635       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1837": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [05d5882852e6eb482b9e3ba850d60958305fa84234fd741f3345967a57b1c1f9] <==
	W0528 20:48:22.870491       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0528 20:48:22.870524       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0528 20:48:23.121957       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0528 20:48:23.122005       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0528 20:48:23.311635       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0528 20:48:23.311711       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0528 20:48:23.925367       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0528 20:48:23.925435       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0528 20:48:23.958612       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0528 20:48:23.958655       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0528 20:48:24.050096       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0528 20:48:24.050145       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0528 20:48:24.080678       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0528 20:48:24.080731       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0528 20:48:24.536591       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0528 20:48:24.536620       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0528 20:48:24.897653       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0528 20:48:24.897756       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0528 20:48:24.921809       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0528 20:48:24.921966       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0528 20:48:25.083149       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0528 20:48:25.083239       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0528 20:48:25.237236       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0528 20:48:25.237327       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0528 20:48:25.341479       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [7611fb5205e4379847769c096d7bee8dd1dcbf6921b3cd3e9e213bd651f0650a] <==
	W0528 20:50:39.924304       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.100:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	E0528 20:50:39.924409       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.100:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	W0528 20:50:40.634832       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.100:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	E0528 20:50:40.635023       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.100:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	W0528 20:50:40.850008       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.100:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	E0528 20:50:40.850111       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.100:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	W0528 20:50:41.661066       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.100:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	E0528 20:50:41.661144       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.100:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	W0528 20:50:42.204726       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.100:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	E0528 20:50:42.204784       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.100:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	W0528 20:50:42.482457       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.100:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	E0528 20:50:42.482515       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.100:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	W0528 20:50:43.353411       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.100:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	E0528 20:50:43.353472       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.100:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	W0528 20:50:43.438638       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.100:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	E0528 20:50:43.438814       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.100:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	W0528 20:50:43.641828       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.100:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	E0528 20:50:43.642145       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.100:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	W0528 20:50:43.810629       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.100:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	E0528 20:50:43.810707       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.100:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	I0528 20:51:03.611991       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0528 20:52:25.158726       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-6z4nb\": pod busybox-fc5497c4f-6z4nb is already assigned to node \"ha-908878-m04\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-6z4nb" node="ha-908878-m04"
	E0528 20:52:25.169237       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 42bb1345-01ec-4f03-a3fa-8291f685a282(default/busybox-fc5497c4f-6z4nb) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-6z4nb"
	E0528 20:52:25.169480       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-6z4nb\": pod busybox-fc5497c4f-6z4nb is already assigned to node \"ha-908878-m04\"" pod="default/busybox-fc5497c4f-6z4nb"
	I0528 20:52:25.169596       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-6z4nb" node="ha-908878-m04"
	
	
	==> kubelet <==
	May 28 20:51:07 ha-908878 kubelet[1380]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 28 20:51:08 ha-908878 kubelet[1380]: I0528 20:51:08.160775    1380 scope.go:117] "RemoveContainer" containerID="bbe5284b67f85e03d1cd00b308ec28baa2dd3f031d560ef8bcee3e352afbd4f0"
	May 28 20:51:16 ha-908878 kubelet[1380]: I0528 20:51:16.033097    1380 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-ljbzs" podStartSLOduration=564.317045393 podStartE2EDuration="9m27.033040567s" podCreationTimestamp="2024-05-28 20:41:49 +0000 UTC" firstStartedPulling="2024-05-28 20:41:50.183077312 +0000 UTC m=+163.160475270" lastFinishedPulling="2024-05-28 20:41:52.899072498 +0000 UTC m=+165.876470444" observedRunningTime="2024-05-28 20:41:53.90458958 +0000 UTC m=+166.881987543" watchObservedRunningTime="2024-05-28 20:51:16.033040567 +0000 UTC m=+729.010438528"
	May 28 20:51:17 ha-908878 kubelet[1380]: I0528 20:51:17.161859    1380 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-908878" podUID="45e2e6e2-6ea1-4498-aca1-9c3060eb9ca4"
	May 28 20:51:17 ha-908878 kubelet[1380]: I0528 20:51:17.188455    1380 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-908878"
	May 28 20:51:17 ha-908878 kubelet[1380]: I0528 20:51:17.930484    1380 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-908878" podUID="45e2e6e2-6ea1-4498-aca1-9c3060eb9ca4"
	May 28 20:51:19 ha-908878 kubelet[1380]: I0528 20:51:19.160270    1380 scope.go:117] "RemoveContainer" containerID="4de963c4394e867a25c17c67c272c43fcf46165892ff733312639139f72f5cc8"
	May 28 20:51:19 ha-908878 kubelet[1380]: E0528 20:51:19.160749    1380 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(d79872e2-b267-446a-99dc-5bf9f398d31c)\"" pod="kube-system/storage-provisioner" podUID="d79872e2-b267-446a-99dc-5bf9f398d31c"
	May 28 20:51:27 ha-908878 kubelet[1380]: I0528 20:51:27.178261    1380 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-908878" podStartSLOduration=10.17823493 podStartE2EDuration="10.17823493s" podCreationTimestamp="2024-05-28 20:51:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-28 20:51:27.177502805 +0000 UTC m=+740.154900771" watchObservedRunningTime="2024-05-28 20:51:27.17823493 +0000 UTC m=+740.155632895"
	May 28 20:51:32 ha-908878 kubelet[1380]: I0528 20:51:32.160725    1380 scope.go:117] "RemoveContainer" containerID="4de963c4394e867a25c17c67c272c43fcf46165892ff733312639139f72f5cc8"
	May 28 20:52:07 ha-908878 kubelet[1380]: E0528 20:52:07.196788    1380 iptables.go:577] "Could not set up iptables canary" err=<
	May 28 20:52:07 ha-908878 kubelet[1380]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 20:52:07 ha-908878 kubelet[1380]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 20:52:07 ha-908878 kubelet[1380]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 20:52:07 ha-908878 kubelet[1380]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 28 20:53:07 ha-908878 kubelet[1380]: E0528 20:53:07.190203    1380 iptables.go:577] "Could not set up iptables canary" err=<
	May 28 20:53:07 ha-908878 kubelet[1380]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 20:53:07 ha-908878 kubelet[1380]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 20:53:07 ha-908878 kubelet[1380]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 20:53:07 ha-908878 kubelet[1380]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 28 20:54:07 ha-908878 kubelet[1380]: E0528 20:54:07.192143    1380 iptables.go:577] "Could not set up iptables canary" err=<
	May 28 20:54:07 ha-908878 kubelet[1380]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 20:54:07 ha-908878 kubelet[1380]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 20:54:07 ha-908878 kubelet[1380]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 20:54:07 ha-908878 kubelet[1380]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0528 20:55:02.458210   31229 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18966-3963/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-908878 -n ha-908878
helpers_test.go:261: (dbg) Run:  kubectl --context ha-908878 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.69s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (304.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-869191
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-869191
E0528 21:09:42.598402   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-869191: exit status 82 (2m1.947531143s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-869191-m03"  ...
	* Stopping node "multinode-869191-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-869191" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-869191 --wait=true -v=8 --alsologtostderr
E0528 21:12:37.451371   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/functional-193928/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-869191 --wait=true -v=8 --alsologtostderr: (3m0.48945946s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-869191
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-869191 -n multinode-869191
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-869191 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-869191 logs -n 25: (1.584563029s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-869191 ssh -n                                                                 | multinode-869191 | jenkins | v1.33.1 | 28 May 24 21:08 UTC | 28 May 24 21:08 UTC |
	|         | multinode-869191-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-869191 cp multinode-869191-m02:/home/docker/cp-test.txt                       | multinode-869191 | jenkins | v1.33.1 | 28 May 24 21:08 UTC | 28 May 24 21:08 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2076892289/001/cp-test_multinode-869191-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-869191 ssh -n                                                                 | multinode-869191 | jenkins | v1.33.1 | 28 May 24 21:08 UTC | 28 May 24 21:08 UTC |
	|         | multinode-869191-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-869191 cp multinode-869191-m02:/home/docker/cp-test.txt                       | multinode-869191 | jenkins | v1.33.1 | 28 May 24 21:08 UTC | 28 May 24 21:08 UTC |
	|         | multinode-869191:/home/docker/cp-test_multinode-869191-m02_multinode-869191.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-869191 ssh -n                                                                 | multinode-869191 | jenkins | v1.33.1 | 28 May 24 21:08 UTC | 28 May 24 21:08 UTC |
	|         | multinode-869191-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-869191 ssh -n multinode-869191 sudo cat                                       | multinode-869191 | jenkins | v1.33.1 | 28 May 24 21:08 UTC | 28 May 24 21:08 UTC |
	|         | /home/docker/cp-test_multinode-869191-m02_multinode-869191.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-869191 cp multinode-869191-m02:/home/docker/cp-test.txt                       | multinode-869191 | jenkins | v1.33.1 | 28 May 24 21:08 UTC | 28 May 24 21:08 UTC |
	|         | multinode-869191-m03:/home/docker/cp-test_multinode-869191-m02_multinode-869191-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-869191 ssh -n                                                                 | multinode-869191 | jenkins | v1.33.1 | 28 May 24 21:08 UTC | 28 May 24 21:08 UTC |
	|         | multinode-869191-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-869191 ssh -n multinode-869191-m03 sudo cat                                   | multinode-869191 | jenkins | v1.33.1 | 28 May 24 21:08 UTC | 28 May 24 21:08 UTC |
	|         | /home/docker/cp-test_multinode-869191-m02_multinode-869191-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-869191 cp testdata/cp-test.txt                                                | multinode-869191 | jenkins | v1.33.1 | 28 May 24 21:08 UTC | 28 May 24 21:08 UTC |
	|         | multinode-869191-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-869191 ssh -n                                                                 | multinode-869191 | jenkins | v1.33.1 | 28 May 24 21:08 UTC | 28 May 24 21:08 UTC |
	|         | multinode-869191-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-869191 cp multinode-869191-m03:/home/docker/cp-test.txt                       | multinode-869191 | jenkins | v1.33.1 | 28 May 24 21:08 UTC | 28 May 24 21:08 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2076892289/001/cp-test_multinode-869191-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-869191 ssh -n                                                                 | multinode-869191 | jenkins | v1.33.1 | 28 May 24 21:08 UTC | 28 May 24 21:08 UTC |
	|         | multinode-869191-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-869191 cp multinode-869191-m03:/home/docker/cp-test.txt                       | multinode-869191 | jenkins | v1.33.1 | 28 May 24 21:08 UTC | 28 May 24 21:08 UTC |
	|         | multinode-869191:/home/docker/cp-test_multinode-869191-m03_multinode-869191.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-869191 ssh -n                                                                 | multinode-869191 | jenkins | v1.33.1 | 28 May 24 21:08 UTC | 28 May 24 21:08 UTC |
	|         | multinode-869191-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-869191 ssh -n multinode-869191 sudo cat                                       | multinode-869191 | jenkins | v1.33.1 | 28 May 24 21:08 UTC | 28 May 24 21:08 UTC |
	|         | /home/docker/cp-test_multinode-869191-m03_multinode-869191.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-869191 cp multinode-869191-m03:/home/docker/cp-test.txt                       | multinode-869191 | jenkins | v1.33.1 | 28 May 24 21:08 UTC | 28 May 24 21:08 UTC |
	|         | multinode-869191-m02:/home/docker/cp-test_multinode-869191-m03_multinode-869191-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-869191 ssh -n                                                                 | multinode-869191 | jenkins | v1.33.1 | 28 May 24 21:08 UTC | 28 May 24 21:08 UTC |
	|         | multinode-869191-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-869191 ssh -n multinode-869191-m02 sudo cat                                   | multinode-869191 | jenkins | v1.33.1 | 28 May 24 21:08 UTC | 28 May 24 21:08 UTC |
	|         | /home/docker/cp-test_multinode-869191-m03_multinode-869191-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-869191 node stop m03                                                          | multinode-869191 | jenkins | v1.33.1 | 28 May 24 21:08 UTC | 28 May 24 21:08 UTC |
	| node    | multinode-869191 node start                                                             | multinode-869191 | jenkins | v1.33.1 | 28 May 24 21:08 UTC | 28 May 24 21:09 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-869191                                                                | multinode-869191 | jenkins | v1.33.1 | 28 May 24 21:09 UTC |                     |
	| stop    | -p multinode-869191                                                                     | multinode-869191 | jenkins | v1.33.1 | 28 May 24 21:09 UTC |                     |
	| start   | -p multinode-869191                                                                     | multinode-869191 | jenkins | v1.33.1 | 28 May 24 21:11 UTC | 28 May 24 21:14 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-869191                                                                | multinode-869191 | jenkins | v1.33.1 | 28 May 24 21:14 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/28 21:11:17
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0528 21:11:17.363297   40275 out.go:291] Setting OutFile to fd 1 ...
	I0528 21:11:17.363540   40275 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:11:17.363549   40275 out.go:304] Setting ErrFile to fd 2...
	I0528 21:11:17.363553   40275 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:11:17.363726   40275 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
	I0528 21:11:17.364232   40275 out.go:298] Setting JSON to false
	I0528 21:11:17.365099   40275 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3220,"bootTime":1716927457,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0528 21:11:17.365154   40275 start.go:139] virtualization: kvm guest
	I0528 21:11:17.367494   40275 out.go:177] * [multinode-869191] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0528 21:11:17.368737   40275 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 21:11:17.369846   40275 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 21:11:17.368781   40275 notify.go:220] Checking for updates...
	I0528 21:11:17.372252   40275 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 21:11:17.373537   40275 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 21:11:17.374820   40275 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0528 21:11:17.376066   40275 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 21:11:17.377624   40275 config.go:182] Loaded profile config "multinode-869191": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 21:11:17.377704   40275 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 21:11:17.378126   40275 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:11:17.378165   40275 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:11:17.401748   40275 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32893
	I0528 21:11:17.402192   40275 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:11:17.402761   40275 main.go:141] libmachine: Using API Version  1
	I0528 21:11:17.402779   40275 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:11:17.403148   40275 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:11:17.403423   40275 main.go:141] libmachine: (multinode-869191) Calling .DriverName
	I0528 21:11:17.438932   40275 out.go:177] * Using the kvm2 driver based on existing profile
	I0528 21:11:17.440130   40275 start.go:297] selected driver: kvm2
	I0528 21:11:17.440144   40275 start.go:901] validating driver "kvm2" against &{Name:multinode-869191 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-869191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.98 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.154 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:11:17.440290   40275 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0528 21:11:17.440600   40275 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 21:11:17.440662   40275 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18966-3963/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0528 21:11:17.455368   40275 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0528 21:11:17.455994   40275 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 21:11:17.456057   40275 cni.go:84] Creating CNI manager for ""
	I0528 21:11:17.456068   40275 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0528 21:11:17.456120   40275 start.go:340] cluster config:
	{Name:multinode-869191 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-869191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.98 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.154 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:11:17.456235   40275 iso.go:125] acquiring lock: {Name:mk0ef48bd19f4019c3ee87391d15b9afc1d5e8d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 21:11:17.458842   40275 out.go:177] * Starting "multinode-869191" primary control-plane node in "multinode-869191" cluster
	I0528 21:11:17.460122   40275 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 21:11:17.460156   40275 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0528 21:11:17.460166   40275 cache.go:56] Caching tarball of preloaded images
	I0528 21:11:17.460230   40275 preload.go:173] Found /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0528 21:11:17.460240   40275 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0528 21:11:17.460355   40275 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/multinode-869191/config.json ...
	I0528 21:11:17.460540   40275 start.go:360] acquireMachinesLock for multinode-869191: {Name:mk6fb3103b370c7f3d9923fd9de7afe2c77239d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0528 21:11:17.460580   40275 start.go:364] duration metric: took 22.039µs to acquireMachinesLock for "multinode-869191"
	I0528 21:11:17.460594   40275 start.go:96] Skipping create...Using existing machine configuration
	I0528 21:11:17.460605   40275 fix.go:54] fixHost starting: 
	I0528 21:11:17.460972   40275 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:11:17.461017   40275 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:11:17.475647   40275 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46367
	I0528 21:11:17.476091   40275 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:11:17.476604   40275 main.go:141] libmachine: Using API Version  1
	I0528 21:11:17.476631   40275 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:11:17.476946   40275 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:11:17.477142   40275 main.go:141] libmachine: (multinode-869191) Calling .DriverName
	I0528 21:11:17.477333   40275 main.go:141] libmachine: (multinode-869191) Calling .GetState
	I0528 21:11:17.478925   40275 fix.go:112] recreateIfNeeded on multinode-869191: state=Running err=<nil>
	W0528 21:11:17.478943   40275 fix.go:138] unexpected machine state, will restart: <nil>
	I0528 21:11:17.481275   40275 out.go:177] * Updating the running kvm2 "multinode-869191" VM ...
	I0528 21:11:17.482552   40275 machine.go:94] provisionDockerMachine start ...
	I0528 21:11:17.482571   40275 main.go:141] libmachine: (multinode-869191) Calling .DriverName
	I0528 21:11:17.482750   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHHostname
	I0528 21:11:17.485204   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:11:17.485722   40275 main.go:141] libmachine: (multinode-869191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:1a:11", ip: ""} in network mk-multinode-869191: {Iface:virbr1 ExpiryTime:2024-05-28 22:06:25 +0000 UTC Type:0 Mac:52:54:00:4e:1a:11 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-869191 Clientid:01:52:54:00:4e:1a:11}
	I0528 21:11:17.485748   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined IP address 192.168.39.65 and MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:11:17.485896   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHPort
	I0528 21:11:17.486067   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHKeyPath
	I0528 21:11:17.486202   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHKeyPath
	I0528 21:11:17.486329   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHUsername
	I0528 21:11:17.486464   40275 main.go:141] libmachine: Using SSH client type: native
	I0528 21:11:17.486641   40275 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0528 21:11:17.486650   40275 main.go:141] libmachine: About to run SSH command:
	hostname
	I0528 21:11:17.603436   40275 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-869191
	
	I0528 21:11:17.603466   40275 main.go:141] libmachine: (multinode-869191) Calling .GetMachineName
	I0528 21:11:17.603711   40275 buildroot.go:166] provisioning hostname "multinode-869191"
	I0528 21:11:17.603739   40275 main.go:141] libmachine: (multinode-869191) Calling .GetMachineName
	I0528 21:11:17.603917   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHHostname
	I0528 21:11:17.606504   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:11:17.606880   40275 main.go:141] libmachine: (multinode-869191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:1a:11", ip: ""} in network mk-multinode-869191: {Iface:virbr1 ExpiryTime:2024-05-28 22:06:25 +0000 UTC Type:0 Mac:52:54:00:4e:1a:11 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-869191 Clientid:01:52:54:00:4e:1a:11}
	I0528 21:11:17.606921   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined IP address 192.168.39.65 and MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:11:17.607035   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHPort
	I0528 21:11:17.607244   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHKeyPath
	I0528 21:11:17.607526   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHKeyPath
	I0528 21:11:17.607690   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHUsername
	I0528 21:11:17.607851   40275 main.go:141] libmachine: Using SSH client type: native
	I0528 21:11:17.608043   40275 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0528 21:11:17.608060   40275 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-869191 && echo "multinode-869191" | sudo tee /etc/hostname
	I0528 21:11:17.738310   40275 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-869191
	
	I0528 21:11:17.738338   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHHostname
	I0528 21:11:17.741341   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:11:17.741744   40275 main.go:141] libmachine: (multinode-869191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:1a:11", ip: ""} in network mk-multinode-869191: {Iface:virbr1 ExpiryTime:2024-05-28 22:06:25 +0000 UTC Type:0 Mac:52:54:00:4e:1a:11 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-869191 Clientid:01:52:54:00:4e:1a:11}
	I0528 21:11:17.741791   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined IP address 192.168.39.65 and MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:11:17.741898   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHPort
	I0528 21:11:17.742088   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHKeyPath
	I0528 21:11:17.742249   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHKeyPath
	I0528 21:11:17.742403   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHUsername
	I0528 21:11:17.742584   40275 main.go:141] libmachine: Using SSH client type: native
	I0528 21:11:17.742789   40275 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0528 21:11:17.742808   40275 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-869191' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-869191/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-869191' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 21:11:17.859295   40275 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 21:11:17.859323   40275 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18966-3963/.minikube CaCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18966-3963/.minikube}
	I0528 21:11:17.859364   40275 buildroot.go:174] setting up certificates
	I0528 21:11:17.859371   40275 provision.go:84] configureAuth start
	I0528 21:11:17.859379   40275 main.go:141] libmachine: (multinode-869191) Calling .GetMachineName
	I0528 21:11:17.859780   40275 main.go:141] libmachine: (multinode-869191) Calling .GetIP
	I0528 21:11:17.862547   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:11:17.862913   40275 main.go:141] libmachine: (multinode-869191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:1a:11", ip: ""} in network mk-multinode-869191: {Iface:virbr1 ExpiryTime:2024-05-28 22:06:25 +0000 UTC Type:0 Mac:52:54:00:4e:1a:11 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-869191 Clientid:01:52:54:00:4e:1a:11}
	I0528 21:11:17.862936   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined IP address 192.168.39.65 and MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:11:17.863086   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHHostname
	I0528 21:11:17.865390   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:11:17.865823   40275 main.go:141] libmachine: (multinode-869191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:1a:11", ip: ""} in network mk-multinode-869191: {Iface:virbr1 ExpiryTime:2024-05-28 22:06:25 +0000 UTC Type:0 Mac:52:54:00:4e:1a:11 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-869191 Clientid:01:52:54:00:4e:1a:11}
	I0528 21:11:17.865849   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined IP address 192.168.39.65 and MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:11:17.866030   40275 provision.go:143] copyHostCerts
	I0528 21:11:17.866061   40275 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem
	I0528 21:11:17.866102   40275 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem, removing ...
	I0528 21:11:17.866118   40275 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem
	I0528 21:11:17.866192   40275 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem (1675 bytes)
	I0528 21:11:17.866299   40275 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem
	I0528 21:11:17.866324   40275 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem, removing ...
	I0528 21:11:17.866330   40275 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem
	I0528 21:11:17.866369   40275 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem (1078 bytes)
	I0528 21:11:17.866427   40275 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem
	I0528 21:11:17.866450   40275 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem, removing ...
	I0528 21:11:17.866467   40275 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem
	I0528 21:11:17.866502   40275 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem (1123 bytes)
	I0528 21:11:17.866563   40275 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem org=jenkins.multinode-869191 san=[127.0.0.1 192.168.39.65 localhost minikube multinode-869191]
	I0528 21:11:18.113588   40275 provision.go:177] copyRemoteCerts
	I0528 21:11:18.113648   40275 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 21:11:18.113679   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHHostname
	I0528 21:11:18.116825   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:11:18.117187   40275 main.go:141] libmachine: (multinode-869191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:1a:11", ip: ""} in network mk-multinode-869191: {Iface:virbr1 ExpiryTime:2024-05-28 22:06:25 +0000 UTC Type:0 Mac:52:54:00:4e:1a:11 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-869191 Clientid:01:52:54:00:4e:1a:11}
	I0528 21:11:18.117215   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined IP address 192.168.39.65 and MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:11:18.117378   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHPort
	I0528 21:11:18.117568   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHKeyPath
	I0528 21:11:18.117775   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHUsername
	I0528 21:11:18.117917   40275 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/multinode-869191/id_rsa Username:docker}
	I0528 21:11:18.205121   40275 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0528 21:11:18.205196   40275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0528 21:11:18.231910   40275 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0528 21:11:18.231977   40275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0528 21:11:18.256928   40275 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0528 21:11:18.256999   40275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0528 21:11:18.281918   40275 provision.go:87] duration metric: took 422.532986ms to configureAuth
	I0528 21:11:18.281957   40275 buildroot.go:189] setting minikube options for container-runtime
	I0528 21:11:18.282194   40275 config.go:182] Loaded profile config "multinode-869191": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 21:11:18.282274   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHHostname
	I0528 21:11:18.284945   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:11:18.285317   40275 main.go:141] libmachine: (multinode-869191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:1a:11", ip: ""} in network mk-multinode-869191: {Iface:virbr1 ExpiryTime:2024-05-28 22:06:25 +0000 UTC Type:0 Mac:52:54:00:4e:1a:11 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-869191 Clientid:01:52:54:00:4e:1a:11}
	I0528 21:11:18.285344   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined IP address 192.168.39.65 and MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:11:18.285519   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHPort
	I0528 21:11:18.285727   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHKeyPath
	I0528 21:11:18.285876   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHKeyPath
	I0528 21:11:18.286011   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHUsername
	I0528 21:11:18.286154   40275 main.go:141] libmachine: Using SSH client type: native
	I0528 21:11:18.286313   40275 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0528 21:11:18.286327   40275 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0528 21:12:49.170890   40275 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0528 21:12:49.170921   40275 machine.go:97] duration metric: took 1m31.688354057s to provisionDockerMachine
	I0528 21:12:49.170937   40275 start.go:293] postStartSetup for "multinode-869191" (driver="kvm2")
	I0528 21:12:49.170951   40275 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0528 21:12:49.170978   40275 main.go:141] libmachine: (multinode-869191) Calling .DriverName
	I0528 21:12:49.171292   40275 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0528 21:12:49.171320   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHHostname
	I0528 21:12:49.174553   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:12:49.175079   40275 main.go:141] libmachine: (multinode-869191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:1a:11", ip: ""} in network mk-multinode-869191: {Iface:virbr1 ExpiryTime:2024-05-28 22:06:25 +0000 UTC Type:0 Mac:52:54:00:4e:1a:11 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-869191 Clientid:01:52:54:00:4e:1a:11}
	I0528 21:12:49.175113   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined IP address 192.168.39.65 and MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:12:49.175359   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHPort
	I0528 21:12:49.175561   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHKeyPath
	I0528 21:12:49.175761   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHUsername
	I0528 21:12:49.175943   40275 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/multinode-869191/id_rsa Username:docker}
	I0528 21:12:49.267168   40275 ssh_runner.go:195] Run: cat /etc/os-release
	I0528 21:12:49.271583   40275 command_runner.go:130] > NAME=Buildroot
	I0528 21:12:49.271604   40275 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0528 21:12:49.271608   40275 command_runner.go:130] > ID=buildroot
	I0528 21:12:49.271614   40275 command_runner.go:130] > VERSION_ID=2023.02.9
	I0528 21:12:49.271618   40275 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0528 21:12:49.271657   40275 info.go:137] Remote host: Buildroot 2023.02.9
	I0528 21:12:49.271670   40275 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/addons for local assets ...
	I0528 21:12:49.271723   40275 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/files for local assets ...
	I0528 21:12:49.271803   40275 filesync.go:149] local asset: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem -> 117602.pem in /etc/ssl/certs
	I0528 21:12:49.271814   40275 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem -> /etc/ssl/certs/117602.pem
	I0528 21:12:49.271890   40275 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0528 21:12:49.281736   40275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem --> /etc/ssl/certs/117602.pem (1708 bytes)
	I0528 21:12:49.306997   40275 start.go:296] duration metric: took 136.046323ms for postStartSetup
	I0528 21:12:49.307037   40275 fix.go:56] duration metric: took 1m31.846435548s for fixHost
	I0528 21:12:49.307057   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHHostname
	I0528 21:12:49.309971   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:12:49.310367   40275 main.go:141] libmachine: (multinode-869191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:1a:11", ip: ""} in network mk-multinode-869191: {Iface:virbr1 ExpiryTime:2024-05-28 22:06:25 +0000 UTC Type:0 Mac:52:54:00:4e:1a:11 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-869191 Clientid:01:52:54:00:4e:1a:11}
	I0528 21:12:49.310390   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined IP address 192.168.39.65 and MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:12:49.310519   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHPort
	I0528 21:12:49.310713   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHKeyPath
	I0528 21:12:49.310859   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHKeyPath
	I0528 21:12:49.310991   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHUsername
	I0528 21:12:49.311174   40275 main.go:141] libmachine: Using SSH client type: native
	I0528 21:12:49.311400   40275 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0528 21:12:49.311412   40275 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0528 21:12:49.422793   40275 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716930769.399574046
	
	I0528 21:12:49.422814   40275 fix.go:216] guest clock: 1716930769.399574046
	I0528 21:12:49.422833   40275 fix.go:229] Guest: 2024-05-28 21:12:49.399574046 +0000 UTC Remote: 2024-05-28 21:12:49.307041177 +0000 UTC m=+91.978147062 (delta=92.532869ms)
	I0528 21:12:49.422851   40275 fix.go:200] guest clock delta is within tolerance: 92.532869ms
	I0528 21:12:49.422857   40275 start.go:83] releasing machines lock for "multinode-869191", held for 1m31.962267719s
	I0528 21:12:49.422877   40275 main.go:141] libmachine: (multinode-869191) Calling .DriverName
	I0528 21:12:49.423138   40275 main.go:141] libmachine: (multinode-869191) Calling .GetIP
	I0528 21:12:49.425714   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:12:49.426138   40275 main.go:141] libmachine: (multinode-869191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:1a:11", ip: ""} in network mk-multinode-869191: {Iface:virbr1 ExpiryTime:2024-05-28 22:06:25 +0000 UTC Type:0 Mac:52:54:00:4e:1a:11 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-869191 Clientid:01:52:54:00:4e:1a:11}
	I0528 21:12:49.426185   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined IP address 192.168.39.65 and MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:12:49.426282   40275 main.go:141] libmachine: (multinode-869191) Calling .DriverName
	I0528 21:12:49.426826   40275 main.go:141] libmachine: (multinode-869191) Calling .DriverName
	I0528 21:12:49.427036   40275 main.go:141] libmachine: (multinode-869191) Calling .DriverName
	I0528 21:12:49.427136   40275 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0528 21:12:49.427185   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHHostname
	I0528 21:12:49.427249   40275 ssh_runner.go:195] Run: cat /version.json
	I0528 21:12:49.427289   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHHostname
	I0528 21:12:49.429813   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:12:49.430194   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:12:49.430286   40275 main.go:141] libmachine: (multinode-869191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:1a:11", ip: ""} in network mk-multinode-869191: {Iface:virbr1 ExpiryTime:2024-05-28 22:06:25 +0000 UTC Type:0 Mac:52:54:00:4e:1a:11 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-869191 Clientid:01:52:54:00:4e:1a:11}
	I0528 21:12:49.430330   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined IP address 192.168.39.65 and MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:12:49.430445   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHPort
	I0528 21:12:49.430801   40275 main.go:141] libmachine: (multinode-869191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:1a:11", ip: ""} in network mk-multinode-869191: {Iface:virbr1 ExpiryTime:2024-05-28 22:06:25 +0000 UTC Type:0 Mac:52:54:00:4e:1a:11 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-869191 Clientid:01:52:54:00:4e:1a:11}
	I0528 21:12:49.430858   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHPort
	I0528 21:12:49.430931   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHKeyPath
	I0528 21:12:49.430843   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined IP address 192.168.39.65 and MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:12:49.432195   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHUsername
	I0528 21:12:49.432209   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHKeyPath
	I0528 21:12:49.432412   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHUsername
	I0528 21:12:49.432428   40275 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/multinode-869191/id_rsa Username:docker}
	I0528 21:12:49.432551   40275 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/multinode-869191/id_rsa Username:docker}
	I0528 21:12:49.540816   40275 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0528 21:12:49.540861   40275 command_runner.go:130] > {"iso_version": "v1.33.1-1716398070-18934", "kicbase_version": "v0.0.44-1716228441-18934", "minikube_version": "v1.33.1", "commit": "7bc64cce06153f72c1bf9cbcf2114663ad5af3b7"}
	I0528 21:12:49.540992   40275 ssh_runner.go:195] Run: systemctl --version
	I0528 21:12:49.546981   40275 command_runner.go:130] > systemd 252 (252)
	I0528 21:12:49.547021   40275 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0528 21:12:49.547304   40275 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0528 21:12:49.713825   40275 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0528 21:12:49.720038   40275 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0528 21:12:49.720306   40275 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0528 21:12:49.720382   40275 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 21:12:49.729775   40275 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0528 21:12:49.729799   40275 start.go:494] detecting cgroup driver to use...
	I0528 21:12:49.729857   40275 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0528 21:12:49.745749   40275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 21:12:49.759251   40275 docker.go:217] disabling cri-docker service (if available) ...
	I0528 21:12:49.759306   40275 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0528 21:12:49.772497   40275 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0528 21:12:49.785592   40275 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0528 21:12:49.929155   40275 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0528 21:12:50.073127   40275 docker.go:233] disabling docker service ...
	I0528 21:12:50.073221   40275 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0528 21:12:50.090853   40275 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0528 21:12:50.104620   40275 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0528 21:12:50.244043   40275 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0528 21:12:50.389900   40275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0528 21:12:50.404235   40275 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 21:12:50.423682   40275 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0528 21:12:50.424150   40275 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0528 21:12:50.424225   40275 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:12:50.434681   40275 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0528 21:12:50.434735   40275 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:12:50.445096   40275 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:12:50.455436   40275 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:12:50.465825   40275 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0528 21:12:50.476902   40275 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:12:50.488300   40275 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:12:50.500561   40275 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:12:50.512094   40275 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0528 21:12:50.522088   40275 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0528 21:12:50.522399   40275 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0528 21:12:50.532649   40275 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 21:12:50.673783   40275 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0528 21:12:51.520421   40275 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0528 21:12:51.520490   40275 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0528 21:12:51.525769   40275 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0528 21:12:51.525796   40275 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0528 21:12:51.525805   40275 command_runner.go:130] > Device: 0,22	Inode: 1340        Links: 1
	I0528 21:12:51.525814   40275 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0528 21:12:51.525821   40275 command_runner.go:130] > Access: 2024-05-28 21:12:51.391295034 +0000
	I0528 21:12:51.525830   40275 command_runner.go:130] > Modify: 2024-05-28 21:12:51.391295034 +0000
	I0528 21:12:51.525838   40275 command_runner.go:130] > Change: 2024-05-28 21:12:51.391295034 +0000
	I0528 21:12:51.525847   40275 command_runner.go:130] >  Birth: -
	I0528 21:12:51.526100   40275 start.go:562] Will wait 60s for crictl version
	I0528 21:12:51.526154   40275 ssh_runner.go:195] Run: which crictl
	I0528 21:12:51.530069   40275 command_runner.go:130] > /usr/bin/crictl
	I0528 21:12:51.530183   40275 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0528 21:12:51.572037   40275 command_runner.go:130] > Version:  0.1.0
	I0528 21:12:51.572060   40275 command_runner.go:130] > RuntimeName:  cri-o
	I0528 21:12:51.572068   40275 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0528 21:12:51.572076   40275 command_runner.go:130] > RuntimeApiVersion:  v1
	I0528 21:12:51.572099   40275 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0528 21:12:51.572165   40275 ssh_runner.go:195] Run: crio --version
	I0528 21:12:51.604895   40275 command_runner.go:130] > crio version 1.29.1
	I0528 21:12:51.604921   40275 command_runner.go:130] > Version:        1.29.1
	I0528 21:12:51.604971   40275 command_runner.go:130] > GitCommit:      unknown
	I0528 21:12:51.604995   40275 command_runner.go:130] > GitCommitDate:  unknown
	I0528 21:12:51.605002   40275 command_runner.go:130] > GitTreeState:   clean
	I0528 21:12:51.605013   40275 command_runner.go:130] > BuildDate:      2024-05-22T23:02:45Z
	I0528 21:12:51.605021   40275 command_runner.go:130] > GoVersion:      go1.21.6
	I0528 21:12:51.605026   40275 command_runner.go:130] > Compiler:       gc
	I0528 21:12:51.605032   40275 command_runner.go:130] > Platform:       linux/amd64
	I0528 21:12:51.605037   40275 command_runner.go:130] > Linkmode:       dynamic
	I0528 21:12:51.605046   40275 command_runner.go:130] > BuildTags:      
	I0528 21:12:51.605050   40275 command_runner.go:130] >   containers_image_ostree_stub
	I0528 21:12:51.605055   40275 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0528 21:12:51.605059   40275 command_runner.go:130] >   btrfs_noversion
	I0528 21:12:51.605064   40275 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0528 21:12:51.605072   40275 command_runner.go:130] >   libdm_no_deferred_remove
	I0528 21:12:51.605079   40275 command_runner.go:130] >   seccomp
	I0528 21:12:51.605093   40275 command_runner.go:130] > LDFlags:          unknown
	I0528 21:12:51.605100   40275 command_runner.go:130] > SeccompEnabled:   true
	I0528 21:12:51.605106   40275 command_runner.go:130] > AppArmorEnabled:  false
	I0528 21:12:51.605173   40275 ssh_runner.go:195] Run: crio --version
	I0528 21:12:51.637205   40275 command_runner.go:130] > crio version 1.29.1
	I0528 21:12:51.637237   40275 command_runner.go:130] > Version:        1.29.1
	I0528 21:12:51.637247   40275 command_runner.go:130] > GitCommit:      unknown
	I0528 21:12:51.637254   40275 command_runner.go:130] > GitCommitDate:  unknown
	I0528 21:12:51.637263   40275 command_runner.go:130] > GitTreeState:   clean
	I0528 21:12:51.637270   40275 command_runner.go:130] > BuildDate:      2024-05-22T23:02:45Z
	I0528 21:12:51.637274   40275 command_runner.go:130] > GoVersion:      go1.21.6
	I0528 21:12:51.637278   40275 command_runner.go:130] > Compiler:       gc
	I0528 21:12:51.637282   40275 command_runner.go:130] > Platform:       linux/amd64
	I0528 21:12:51.637287   40275 command_runner.go:130] > Linkmode:       dynamic
	I0528 21:12:51.637291   40275 command_runner.go:130] > BuildTags:      
	I0528 21:12:51.637295   40275 command_runner.go:130] >   containers_image_ostree_stub
	I0528 21:12:51.637299   40275 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0528 21:12:51.637302   40275 command_runner.go:130] >   btrfs_noversion
	I0528 21:12:51.637306   40275 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0528 21:12:51.637310   40275 command_runner.go:130] >   libdm_no_deferred_remove
	I0528 21:12:51.637314   40275 command_runner.go:130] >   seccomp
	I0528 21:12:51.637319   40275 command_runner.go:130] > LDFlags:          unknown
	I0528 21:12:51.637329   40275 command_runner.go:130] > SeccompEnabled:   true
	I0528 21:12:51.637337   40275 command_runner.go:130] > AppArmorEnabled:  false
	I0528 21:12:51.640784   40275 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0528 21:12:51.642219   40275 main.go:141] libmachine: (multinode-869191) Calling .GetIP
	I0528 21:12:51.644755   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:12:51.645082   40275 main.go:141] libmachine: (multinode-869191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:1a:11", ip: ""} in network mk-multinode-869191: {Iface:virbr1 ExpiryTime:2024-05-28 22:06:25 +0000 UTC Type:0 Mac:52:54:00:4e:1a:11 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-869191 Clientid:01:52:54:00:4e:1a:11}
	I0528 21:12:51.645109   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined IP address 192.168.39.65 and MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:12:51.645417   40275 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0528 21:12:51.650172   40275 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0528 21:12:51.650295   40275 kubeadm.go:877] updating cluster {Name:multinode-869191 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-869191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.98 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.154 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0528 21:12:51.650432   40275 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 21:12:51.650493   40275 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 21:12:51.692412   40275 command_runner.go:130] > {
	I0528 21:12:51.692433   40275 command_runner.go:130] >   "images": [
	I0528 21:12:51.692439   40275 command_runner.go:130] >     {
	I0528 21:12:51.692450   40275 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0528 21:12:51.692456   40275 command_runner.go:130] >       "repoTags": [
	I0528 21:12:51.692465   40275 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0528 21:12:51.692471   40275 command_runner.go:130] >       ],
	I0528 21:12:51.692476   40275 command_runner.go:130] >       "repoDigests": [
	I0528 21:12:51.692487   40275 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0528 21:12:51.692496   40275 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0528 21:12:51.692502   40275 command_runner.go:130] >       ],
	I0528 21:12:51.692510   40275 command_runner.go:130] >       "size": "65291810",
	I0528 21:12:51.692519   40275 command_runner.go:130] >       "uid": null,
	I0528 21:12:51.692527   40275 command_runner.go:130] >       "username": "",
	I0528 21:12:51.692539   40275 command_runner.go:130] >       "spec": null,
	I0528 21:12:51.692545   40275 command_runner.go:130] >       "pinned": false
	I0528 21:12:51.692551   40275 command_runner.go:130] >     },
	I0528 21:12:51.692557   40275 command_runner.go:130] >     {
	I0528 21:12:51.692568   40275 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0528 21:12:51.692579   40275 command_runner.go:130] >       "repoTags": [
	I0528 21:12:51.692589   40275 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0528 21:12:51.692595   40275 command_runner.go:130] >       ],
	I0528 21:12:51.692602   40275 command_runner.go:130] >       "repoDigests": [
	I0528 21:12:51.692615   40275 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0528 21:12:51.692631   40275 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0528 21:12:51.692637   40275 command_runner.go:130] >       ],
	I0528 21:12:51.692652   40275 command_runner.go:130] >       "size": "65908273",
	I0528 21:12:51.692662   40275 command_runner.go:130] >       "uid": null,
	I0528 21:12:51.692674   40275 command_runner.go:130] >       "username": "",
	I0528 21:12:51.692684   40275 command_runner.go:130] >       "spec": null,
	I0528 21:12:51.692691   40275 command_runner.go:130] >       "pinned": false
	I0528 21:12:51.692698   40275 command_runner.go:130] >     },
	I0528 21:12:51.692703   40275 command_runner.go:130] >     {
	I0528 21:12:51.692715   40275 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0528 21:12:51.692725   40275 command_runner.go:130] >       "repoTags": [
	I0528 21:12:51.692734   40275 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0528 21:12:51.692743   40275 command_runner.go:130] >       ],
	I0528 21:12:51.692750   40275 command_runner.go:130] >       "repoDigests": [
	I0528 21:12:51.692763   40275 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0528 21:12:51.692778   40275 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0528 21:12:51.692787   40275 command_runner.go:130] >       ],
	I0528 21:12:51.692795   40275 command_runner.go:130] >       "size": "1363676",
	I0528 21:12:51.692803   40275 command_runner.go:130] >       "uid": null,
	I0528 21:12:51.692811   40275 command_runner.go:130] >       "username": "",
	I0528 21:12:51.692820   40275 command_runner.go:130] >       "spec": null,
	I0528 21:12:51.692829   40275 command_runner.go:130] >       "pinned": false
	I0528 21:12:51.692838   40275 command_runner.go:130] >     },
	I0528 21:12:51.692844   40275 command_runner.go:130] >     {
	I0528 21:12:51.692858   40275 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0528 21:12:51.692868   40275 command_runner.go:130] >       "repoTags": [
	I0528 21:12:51.692880   40275 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0528 21:12:51.692889   40275 command_runner.go:130] >       ],
	I0528 21:12:51.692896   40275 command_runner.go:130] >       "repoDigests": [
	I0528 21:12:51.692912   40275 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0528 21:12:51.692937   40275 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0528 21:12:51.692945   40275 command_runner.go:130] >       ],
	I0528 21:12:51.692951   40275 command_runner.go:130] >       "size": "31470524",
	I0528 21:12:51.692957   40275 command_runner.go:130] >       "uid": null,
	I0528 21:12:51.692963   40275 command_runner.go:130] >       "username": "",
	I0528 21:12:51.692970   40275 command_runner.go:130] >       "spec": null,
	I0528 21:12:51.692976   40275 command_runner.go:130] >       "pinned": false
	I0528 21:12:51.692980   40275 command_runner.go:130] >     },
	I0528 21:12:51.692994   40275 command_runner.go:130] >     {
	I0528 21:12:51.693008   40275 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0528 21:12:51.693018   40275 command_runner.go:130] >       "repoTags": [
	I0528 21:12:51.693031   40275 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0528 21:12:51.693054   40275 command_runner.go:130] >       ],
	I0528 21:12:51.693067   40275 command_runner.go:130] >       "repoDigests": [
	I0528 21:12:51.693080   40275 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0528 21:12:51.693096   40275 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0528 21:12:51.693104   40275 command_runner.go:130] >       ],
	I0528 21:12:51.693112   40275 command_runner.go:130] >       "size": "61245718",
	I0528 21:12:51.693122   40275 command_runner.go:130] >       "uid": null,
	I0528 21:12:51.693131   40275 command_runner.go:130] >       "username": "nonroot",
	I0528 21:12:51.693141   40275 command_runner.go:130] >       "spec": null,
	I0528 21:12:51.693150   40275 command_runner.go:130] >       "pinned": false
	I0528 21:12:51.693156   40275 command_runner.go:130] >     },
	I0528 21:12:51.693164   40275 command_runner.go:130] >     {
	I0528 21:12:51.693175   40275 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0528 21:12:51.693185   40275 command_runner.go:130] >       "repoTags": [
	I0528 21:12:51.693193   40275 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0528 21:12:51.693202   40275 command_runner.go:130] >       ],
	I0528 21:12:51.693210   40275 command_runner.go:130] >       "repoDigests": [
	I0528 21:12:51.693232   40275 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0528 21:12:51.693247   40275 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0528 21:12:51.693255   40275 command_runner.go:130] >       ],
	I0528 21:12:51.693263   40275 command_runner.go:130] >       "size": "150779692",
	I0528 21:12:51.693272   40275 command_runner.go:130] >       "uid": {
	I0528 21:12:51.693279   40275 command_runner.go:130] >         "value": "0"
	I0528 21:12:51.693288   40275 command_runner.go:130] >       },
	I0528 21:12:51.693296   40275 command_runner.go:130] >       "username": "",
	I0528 21:12:51.693305   40275 command_runner.go:130] >       "spec": null,
	I0528 21:12:51.693313   40275 command_runner.go:130] >       "pinned": false
	I0528 21:12:51.693322   40275 command_runner.go:130] >     },
	I0528 21:12:51.693328   40275 command_runner.go:130] >     {
	I0528 21:12:51.693341   40275 command_runner.go:130] >       "id": "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a",
	I0528 21:12:51.693351   40275 command_runner.go:130] >       "repoTags": [
	I0528 21:12:51.693363   40275 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.1"
	I0528 21:12:51.693378   40275 command_runner.go:130] >       ],
	I0528 21:12:51.693388   40275 command_runner.go:130] >       "repoDigests": [
	I0528 21:12:51.693401   40275 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea",
	I0528 21:12:51.693416   40275 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"
	I0528 21:12:51.693426   40275 command_runner.go:130] >       ],
	I0528 21:12:51.693437   40275 command_runner.go:130] >       "size": "117601759",
	I0528 21:12:51.693444   40275 command_runner.go:130] >       "uid": {
	I0528 21:12:51.693450   40275 command_runner.go:130] >         "value": "0"
	I0528 21:12:51.693459   40275 command_runner.go:130] >       },
	I0528 21:12:51.693466   40275 command_runner.go:130] >       "username": "",
	I0528 21:12:51.693476   40275 command_runner.go:130] >       "spec": null,
	I0528 21:12:51.693484   40275 command_runner.go:130] >       "pinned": false
	I0528 21:12:51.693492   40275 command_runner.go:130] >     },
	I0528 21:12:51.693499   40275 command_runner.go:130] >     {
	I0528 21:12:51.693512   40275 command_runner.go:130] >       "id": "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c",
	I0528 21:12:51.693520   40275 command_runner.go:130] >       "repoTags": [
	I0528 21:12:51.693530   40275 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.1"
	I0528 21:12:51.693539   40275 command_runner.go:130] >       ],
	I0528 21:12:51.693546   40275 command_runner.go:130] >       "repoDigests": [
	I0528 21:12:51.693600   40275 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52",
	I0528 21:12:51.693617   40275 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"
	I0528 21:12:51.693623   40275 command_runner.go:130] >       ],
	I0528 21:12:51.693632   40275 command_runner.go:130] >       "size": "112170310",
	I0528 21:12:51.693641   40275 command_runner.go:130] >       "uid": {
	I0528 21:12:51.693648   40275 command_runner.go:130] >         "value": "0"
	I0528 21:12:51.693657   40275 command_runner.go:130] >       },
	I0528 21:12:51.693664   40275 command_runner.go:130] >       "username": "",
	I0528 21:12:51.693670   40275 command_runner.go:130] >       "spec": null,
	I0528 21:12:51.693675   40275 command_runner.go:130] >       "pinned": false
	I0528 21:12:51.693679   40275 command_runner.go:130] >     },
	I0528 21:12:51.693684   40275 command_runner.go:130] >     {
	I0528 21:12:51.693695   40275 command_runner.go:130] >       "id": "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd",
	I0528 21:12:51.693702   40275 command_runner.go:130] >       "repoTags": [
	I0528 21:12:51.693711   40275 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.1"
	I0528 21:12:51.693717   40275 command_runner.go:130] >       ],
	I0528 21:12:51.693727   40275 command_runner.go:130] >       "repoDigests": [
	I0528 21:12:51.693750   40275 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa",
	I0528 21:12:51.693780   40275 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"
	I0528 21:12:51.693790   40275 command_runner.go:130] >       ],
	I0528 21:12:51.693798   40275 command_runner.go:130] >       "size": "85933465",
	I0528 21:12:51.693825   40275 command_runner.go:130] >       "uid": null,
	I0528 21:12:51.693836   40275 command_runner.go:130] >       "username": "",
	I0528 21:12:51.693844   40275 command_runner.go:130] >       "spec": null,
	I0528 21:12:51.693853   40275 command_runner.go:130] >       "pinned": false
	I0528 21:12:51.693859   40275 command_runner.go:130] >     },
	I0528 21:12:51.693868   40275 command_runner.go:130] >     {
	I0528 21:12:51.693882   40275 command_runner.go:130] >       "id": "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035",
	I0528 21:12:51.693892   40275 command_runner.go:130] >       "repoTags": [
	I0528 21:12:51.693902   40275 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.1"
	I0528 21:12:51.693911   40275 command_runner.go:130] >       ],
	I0528 21:12:51.693919   40275 command_runner.go:130] >       "repoDigests": [
	I0528 21:12:51.693935   40275 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036",
	I0528 21:12:51.693951   40275 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"
	I0528 21:12:51.693959   40275 command_runner.go:130] >       ],
	I0528 21:12:51.693967   40275 command_runner.go:130] >       "size": "63026504",
	I0528 21:12:51.693976   40275 command_runner.go:130] >       "uid": {
	I0528 21:12:51.693984   40275 command_runner.go:130] >         "value": "0"
	I0528 21:12:51.693992   40275 command_runner.go:130] >       },
	I0528 21:12:51.693999   40275 command_runner.go:130] >       "username": "",
	I0528 21:12:51.694008   40275 command_runner.go:130] >       "spec": null,
	I0528 21:12:51.694015   40275 command_runner.go:130] >       "pinned": false
	I0528 21:12:51.694024   40275 command_runner.go:130] >     },
	I0528 21:12:51.694032   40275 command_runner.go:130] >     {
	I0528 21:12:51.694044   40275 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0528 21:12:51.694053   40275 command_runner.go:130] >       "repoTags": [
	I0528 21:12:51.694061   40275 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0528 21:12:51.694070   40275 command_runner.go:130] >       ],
	I0528 21:12:51.694076   40275 command_runner.go:130] >       "repoDigests": [
	I0528 21:12:51.694089   40275 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0528 21:12:51.694104   40275 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0528 21:12:51.694113   40275 command_runner.go:130] >       ],
	I0528 21:12:51.694123   40275 command_runner.go:130] >       "size": "750414",
	I0528 21:12:51.694139   40275 command_runner.go:130] >       "uid": {
	I0528 21:12:51.694150   40275 command_runner.go:130] >         "value": "65535"
	I0528 21:12:51.694159   40275 command_runner.go:130] >       },
	I0528 21:12:51.694166   40275 command_runner.go:130] >       "username": "",
	I0528 21:12:51.694175   40275 command_runner.go:130] >       "spec": null,
	I0528 21:12:51.694181   40275 command_runner.go:130] >       "pinned": true
	I0528 21:12:51.694187   40275 command_runner.go:130] >     }
	I0528 21:12:51.694193   40275 command_runner.go:130] >   ]
	I0528 21:12:51.694198   40275 command_runner.go:130] > }
	I0528 21:12:51.694388   40275 crio.go:514] all images are preloaded for cri-o runtime.
	I0528 21:12:51.694400   40275 crio.go:433] Images already preloaded, skipping extraction
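The JSON that `sudo crictl images --output json` prints above has a small, stable shape: an `images` array whose entries carry `id`, `repoTags`, `repoDigests`, `size`, `uid`, `username`, `spec`, and `pinned`. A minimal Go sketch of how such output could be parsed to check whether a tag is already present follows; the `crictlImage` struct and `hasImage` helper are illustrative names only, not minikube's own code.

```go
// Sketch: parse `crictl images --output json` and look for a repo tag.
// Field names mirror the keys visible in the log output above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type crictlImage struct {
	ID          string   `json:"id"`
	RepoTags    []string `json:"repoTags"`
	RepoDigests []string `json:"repoDigests"`
	Size        string   `json:"size"` // crictl emits size as a quoted string
	Pinned      bool     `json:"pinned"`
}

type crictlImageList struct {
	Images []crictlImage `json:"images"`
}

// hasImage is a hypothetical helper: it shells out to crictl and reports
// whether any listed image carries the requested tag.
func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list crictlImageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/pause:3.9")
	fmt.Println(ok, err)
}
```

Against the listing above, checking for `registry.k8s.io/pause:3.9` would be expected to report true, since that tag appears in the output.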
	I0528 21:12:51.694453   40275 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 21:12:51.729165   40275 command_runner.go:130] > {
	I0528 21:12:51.729188   40275 command_runner.go:130] >   "images": [
	I0528 21:12:51.729195   40275 command_runner.go:130] >     {
	I0528 21:12:51.729205   40275 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0528 21:12:51.729211   40275 command_runner.go:130] >       "repoTags": [
	I0528 21:12:51.729219   40275 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0528 21:12:51.729231   40275 command_runner.go:130] >       ],
	I0528 21:12:51.729237   40275 command_runner.go:130] >       "repoDigests": [
	I0528 21:12:51.729251   40275 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0528 21:12:51.729265   40275 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0528 21:12:51.729271   40275 command_runner.go:130] >       ],
	I0528 21:12:51.729282   40275 command_runner.go:130] >       "size": "65291810",
	I0528 21:12:51.729292   40275 command_runner.go:130] >       "uid": null,
	I0528 21:12:51.729299   40275 command_runner.go:130] >       "username": "",
	I0528 21:12:51.729308   40275 command_runner.go:130] >       "spec": null,
	I0528 21:12:51.729316   40275 command_runner.go:130] >       "pinned": false
	I0528 21:12:51.729325   40275 command_runner.go:130] >     },
	I0528 21:12:51.729331   40275 command_runner.go:130] >     {
	I0528 21:12:51.729345   40275 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0528 21:12:51.729352   40275 command_runner.go:130] >       "repoTags": [
	I0528 21:12:51.729361   40275 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0528 21:12:51.729368   40275 command_runner.go:130] >       ],
	I0528 21:12:51.729376   40275 command_runner.go:130] >       "repoDigests": [
	I0528 21:12:51.729389   40275 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0528 21:12:51.729404   40275 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0528 21:12:51.729413   40275 command_runner.go:130] >       ],
	I0528 21:12:51.729424   40275 command_runner.go:130] >       "size": "65908273",
	I0528 21:12:51.729432   40275 command_runner.go:130] >       "uid": null,
	I0528 21:12:51.729445   40275 command_runner.go:130] >       "username": "",
	I0528 21:12:51.729454   40275 command_runner.go:130] >       "spec": null,
	I0528 21:12:51.729461   40275 command_runner.go:130] >       "pinned": false
	I0528 21:12:51.729470   40275 command_runner.go:130] >     },
	I0528 21:12:51.729476   40275 command_runner.go:130] >     {
	I0528 21:12:51.729489   40275 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0528 21:12:51.729499   40275 command_runner.go:130] >       "repoTags": [
	I0528 21:12:51.729509   40275 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0528 21:12:51.729519   40275 command_runner.go:130] >       ],
	I0528 21:12:51.729528   40275 command_runner.go:130] >       "repoDigests": [
	I0528 21:12:51.729544   40275 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0528 21:12:51.729559   40275 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0528 21:12:51.729568   40275 command_runner.go:130] >       ],
	I0528 21:12:51.729575   40275 command_runner.go:130] >       "size": "1363676",
	I0528 21:12:51.729584   40275 command_runner.go:130] >       "uid": null,
	I0528 21:12:51.729590   40275 command_runner.go:130] >       "username": "",
	I0528 21:12:51.729598   40275 command_runner.go:130] >       "spec": null,
	I0528 21:12:51.729608   40275 command_runner.go:130] >       "pinned": false
	I0528 21:12:51.729615   40275 command_runner.go:130] >     },
	I0528 21:12:51.729626   40275 command_runner.go:130] >     {
	I0528 21:12:51.729637   40275 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0528 21:12:51.729646   40275 command_runner.go:130] >       "repoTags": [
	I0528 21:12:51.729655   40275 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0528 21:12:51.729664   40275 command_runner.go:130] >       ],
	I0528 21:12:51.729671   40275 command_runner.go:130] >       "repoDigests": [
	I0528 21:12:51.729687   40275 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0528 21:12:51.729707   40275 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0528 21:12:51.729716   40275 command_runner.go:130] >       ],
	I0528 21:12:51.729724   40275 command_runner.go:130] >       "size": "31470524",
	I0528 21:12:51.729733   40275 command_runner.go:130] >       "uid": null,
	I0528 21:12:51.729740   40275 command_runner.go:130] >       "username": "",
	I0528 21:12:51.729749   40275 command_runner.go:130] >       "spec": null,
	I0528 21:12:51.729756   40275 command_runner.go:130] >       "pinned": false
	I0528 21:12:51.729780   40275 command_runner.go:130] >     },
	I0528 21:12:51.729787   40275 command_runner.go:130] >     {
	I0528 21:12:51.729801   40275 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0528 21:12:51.729811   40275 command_runner.go:130] >       "repoTags": [
	I0528 21:12:51.729821   40275 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0528 21:12:51.729830   40275 command_runner.go:130] >       ],
	I0528 21:12:51.729838   40275 command_runner.go:130] >       "repoDigests": [
	I0528 21:12:51.729854   40275 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0528 21:12:51.729869   40275 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0528 21:12:51.729879   40275 command_runner.go:130] >       ],
	I0528 21:12:51.729886   40275 command_runner.go:130] >       "size": "61245718",
	I0528 21:12:51.729896   40275 command_runner.go:130] >       "uid": null,
	I0528 21:12:51.729907   40275 command_runner.go:130] >       "username": "nonroot",
	I0528 21:12:51.729914   40275 command_runner.go:130] >       "spec": null,
	I0528 21:12:51.729922   40275 command_runner.go:130] >       "pinned": false
	I0528 21:12:51.729928   40275 command_runner.go:130] >     },
	I0528 21:12:51.729937   40275 command_runner.go:130] >     {
	I0528 21:12:51.729947   40275 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0528 21:12:51.729957   40275 command_runner.go:130] >       "repoTags": [
	I0528 21:12:51.729966   40275 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0528 21:12:51.729974   40275 command_runner.go:130] >       ],
	I0528 21:12:51.729981   40275 command_runner.go:130] >       "repoDigests": [
	I0528 21:12:51.729997   40275 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0528 21:12:51.730012   40275 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0528 21:12:51.730021   40275 command_runner.go:130] >       ],
	I0528 21:12:51.730029   40275 command_runner.go:130] >       "size": "150779692",
	I0528 21:12:51.730038   40275 command_runner.go:130] >       "uid": {
	I0528 21:12:51.730046   40275 command_runner.go:130] >         "value": "0"
	I0528 21:12:51.730055   40275 command_runner.go:130] >       },
	I0528 21:12:51.730062   40275 command_runner.go:130] >       "username": "",
	I0528 21:12:51.730074   40275 command_runner.go:130] >       "spec": null,
	I0528 21:12:51.730084   40275 command_runner.go:130] >       "pinned": false
	I0528 21:12:51.730094   40275 command_runner.go:130] >     },
	I0528 21:12:51.730100   40275 command_runner.go:130] >     {
	I0528 21:12:51.730111   40275 command_runner.go:130] >       "id": "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a",
	I0528 21:12:51.730120   40275 command_runner.go:130] >       "repoTags": [
	I0528 21:12:51.730129   40275 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.1"
	I0528 21:12:51.730138   40275 command_runner.go:130] >       ],
	I0528 21:12:51.730146   40275 command_runner.go:130] >       "repoDigests": [
	I0528 21:12:51.730162   40275 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea",
	I0528 21:12:51.730178   40275 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"
	I0528 21:12:51.730187   40275 command_runner.go:130] >       ],
	I0528 21:12:51.730196   40275 command_runner.go:130] >       "size": "117601759",
	I0528 21:12:51.730206   40275 command_runner.go:130] >       "uid": {
	I0528 21:12:51.730213   40275 command_runner.go:130] >         "value": "0"
	I0528 21:12:51.730222   40275 command_runner.go:130] >       },
	I0528 21:12:51.730240   40275 command_runner.go:130] >       "username": "",
	I0528 21:12:51.730250   40275 command_runner.go:130] >       "spec": null,
	I0528 21:12:51.730257   40275 command_runner.go:130] >       "pinned": false
	I0528 21:12:51.730267   40275 command_runner.go:130] >     },
	I0528 21:12:51.730275   40275 command_runner.go:130] >     {
	I0528 21:12:51.730289   40275 command_runner.go:130] >       "id": "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c",
	I0528 21:12:51.730297   40275 command_runner.go:130] >       "repoTags": [
	I0528 21:12:51.730309   40275 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.1"
	I0528 21:12:51.730318   40275 command_runner.go:130] >       ],
	I0528 21:12:51.730326   40275 command_runner.go:130] >       "repoDigests": [
	I0528 21:12:51.730348   40275 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52",
	I0528 21:12:51.730363   40275 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"
	I0528 21:12:51.730373   40275 command_runner.go:130] >       ],
	I0528 21:12:51.730380   40275 command_runner.go:130] >       "size": "112170310",
	I0528 21:12:51.730389   40275 command_runner.go:130] >       "uid": {
	I0528 21:12:51.730397   40275 command_runner.go:130] >         "value": "0"
	I0528 21:12:51.730405   40275 command_runner.go:130] >       },
	I0528 21:12:51.730413   40275 command_runner.go:130] >       "username": "",
	I0528 21:12:51.730423   40275 command_runner.go:130] >       "spec": null,
	I0528 21:12:51.730432   40275 command_runner.go:130] >       "pinned": false
	I0528 21:12:51.730438   40275 command_runner.go:130] >     },
	I0528 21:12:51.730445   40275 command_runner.go:130] >     {
	I0528 21:12:51.730458   40275 command_runner.go:130] >       "id": "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd",
	I0528 21:12:51.730466   40275 command_runner.go:130] >       "repoTags": [
	I0528 21:12:51.730478   40275 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.1"
	I0528 21:12:51.730486   40275 command_runner.go:130] >       ],
	I0528 21:12:51.730494   40275 command_runner.go:130] >       "repoDigests": [
	I0528 21:12:51.730509   40275 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa",
	I0528 21:12:51.730525   40275 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"
	I0528 21:12:51.730534   40275 command_runner.go:130] >       ],
	I0528 21:12:51.730541   40275 command_runner.go:130] >       "size": "85933465",
	I0528 21:12:51.730548   40275 command_runner.go:130] >       "uid": null,
	I0528 21:12:51.730559   40275 command_runner.go:130] >       "username": "",
	I0528 21:12:51.730568   40275 command_runner.go:130] >       "spec": null,
	I0528 21:12:51.730578   40275 command_runner.go:130] >       "pinned": false
	I0528 21:12:51.730587   40275 command_runner.go:130] >     },
	I0528 21:12:51.730593   40275 command_runner.go:130] >     {
	I0528 21:12:51.730607   40275 command_runner.go:130] >       "id": "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035",
	I0528 21:12:51.730616   40275 command_runner.go:130] >       "repoTags": [
	I0528 21:12:51.730628   40275 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.1"
	I0528 21:12:51.730637   40275 command_runner.go:130] >       ],
	I0528 21:12:51.730645   40275 command_runner.go:130] >       "repoDigests": [
	I0528 21:12:51.730661   40275 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036",
	I0528 21:12:51.730676   40275 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"
	I0528 21:12:51.730685   40275 command_runner.go:130] >       ],
	I0528 21:12:51.730693   40275 command_runner.go:130] >       "size": "63026504",
	I0528 21:12:51.730703   40275 command_runner.go:130] >       "uid": {
	I0528 21:12:51.730711   40275 command_runner.go:130] >         "value": "0"
	I0528 21:12:51.730722   40275 command_runner.go:130] >       },
	I0528 21:12:51.730732   40275 command_runner.go:130] >       "username": "",
	I0528 21:12:51.730739   40275 command_runner.go:130] >       "spec": null,
	I0528 21:12:51.730749   40275 command_runner.go:130] >       "pinned": false
	I0528 21:12:51.730757   40275 command_runner.go:130] >     },
	I0528 21:12:51.730765   40275 command_runner.go:130] >     {
	I0528 21:12:51.730777   40275 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0528 21:12:51.730786   40275 command_runner.go:130] >       "repoTags": [
	I0528 21:12:51.730795   40275 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0528 21:12:51.730804   40275 command_runner.go:130] >       ],
	I0528 21:12:51.730811   40275 command_runner.go:130] >       "repoDigests": [
	I0528 21:12:51.730827   40275 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0528 21:12:51.730842   40275 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0528 21:12:51.730851   40275 command_runner.go:130] >       ],
	I0528 21:12:51.730858   40275 command_runner.go:130] >       "size": "750414",
	I0528 21:12:51.730869   40275 command_runner.go:130] >       "uid": {
	I0528 21:12:51.730879   40275 command_runner.go:130] >         "value": "65535"
	I0528 21:12:51.730885   40275 command_runner.go:130] >       },
	I0528 21:12:51.730895   40275 command_runner.go:130] >       "username": "",
	I0528 21:12:51.730904   40275 command_runner.go:130] >       "spec": null,
	I0528 21:12:51.730911   40275 command_runner.go:130] >       "pinned": true
	I0528 21:12:51.730920   40275 command_runner.go:130] >     }
	I0528 21:12:51.730926   40275 command_runner.go:130] >   ]
	I0528 21:12:51.730934   40275 command_runner.go:130] > }
	I0528 21:12:51.731055   40275 crio.go:514] all images are preloaded for cri-o runtime.
	I0528 21:12:51.731068   40275 cache_images.go:84] Images are preloaded, skipping loading
	I0528 21:12:51.731077   40275 kubeadm.go:928] updating node { 192.168.39.65 8443 v1.30.1 crio true true} ...
	I0528 21:12:51.731185   40275 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-869191 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-869191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
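The kubelet override logged above is plain systemd unit text with the node's IP, hostname, and Kubernetes version substituted in. As a rough sketch of that substitution (the template string and field names here are illustrative, not minikube's actual kubeadm template), the same ExecStart line can be rendered with Go's text/template using the values from the node entry in the log:

```go
// Sketch: render a kubelet systemd override from node parameters.
// Values below are taken from the node entry logged above.
package main

import (
	"os"
	"text/template"
)

const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
	_ = t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.30.1",
		"NodeName":          "multinode-869191",
		"NodeIP":            "192.168.39.65",
	})
}
```

The empty `ExecStart=` line is the usual systemd idiom for clearing the packaged command before supplying the override.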
	I0528 21:12:51.731269   40275 ssh_runner.go:195] Run: crio config
	I0528 21:12:51.780871   40275 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0528 21:12:51.780901   40275 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0528 21:12:51.780912   40275 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0528 21:12:51.780917   40275 command_runner.go:130] > #
	I0528 21:12:51.780929   40275 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0528 21:12:51.780936   40275 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0528 21:12:51.780944   40275 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0528 21:12:51.780955   40275 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0528 21:12:51.780965   40275 command_runner.go:130] > # reload'.
	I0528 21:12:51.780985   40275 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0528 21:12:51.780998   40275 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0528 21:12:51.781015   40275 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0528 21:12:51.781022   40275 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0528 21:12:51.781028   40275 command_runner.go:130] > [crio]
	I0528 21:12:51.781036   40275 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0528 21:12:51.781045   40275 command_runner.go:130] > # containers images, in this directory.
	I0528 21:12:51.781054   40275 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0528 21:12:51.781077   40275 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0528 21:12:51.781141   40275 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0528 21:12:51.781167   40275 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0528 21:12:51.781380   40275 command_runner.go:130] > # imagestore = ""
	I0528 21:12:51.781394   40275 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0528 21:12:51.781401   40275 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0528 21:12:51.781562   40275 command_runner.go:130] > storage_driver = "overlay"
	I0528 21:12:51.781579   40275 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0528 21:12:51.781588   40275 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0528 21:12:51.781597   40275 command_runner.go:130] > storage_option = [
	I0528 21:12:51.781771   40275 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0528 21:12:51.781812   40275 command_runner.go:130] > ]
	I0528 21:12:51.781831   40275 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0528 21:12:51.781845   40275 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0528 21:12:51.782191   40275 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0528 21:12:51.782213   40275 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0528 21:12:51.782222   40275 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0528 21:12:51.782230   40275 command_runner.go:130] > # always happen on a node reboot
	I0528 21:12:51.782486   40275 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0528 21:12:51.782511   40275 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0528 21:12:51.782523   40275 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0528 21:12:51.782535   40275 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0528 21:12:51.782607   40275 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0528 21:12:51.782630   40275 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0528 21:12:51.782645   40275 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0528 21:12:51.782883   40275 command_runner.go:130] > # internal_wipe = true
	I0528 21:12:51.782897   40275 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0528 21:12:51.782903   40275 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0528 21:12:51.783237   40275 command_runner.go:130] > # internal_repair = false
	I0528 21:12:51.783249   40275 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0528 21:12:51.783257   40275 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0528 21:12:51.783266   40275 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0528 21:12:51.783505   40275 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0528 21:12:51.783515   40275 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0528 21:12:51.783519   40275 command_runner.go:130] > [crio.api]
	I0528 21:12:51.783524   40275 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0528 21:12:51.783740   40275 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0528 21:12:51.783754   40275 command_runner.go:130] > # IP address on which the stream server will listen.
	I0528 21:12:51.783990   40275 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0528 21:12:51.784016   40275 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0528 21:12:51.784025   40275 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0528 21:12:51.784238   40275 command_runner.go:130] > # stream_port = "0"
	I0528 21:12:51.784253   40275 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0528 21:12:51.784676   40275 command_runner.go:130] > # stream_enable_tls = false
	I0528 21:12:51.784694   40275 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0528 21:12:51.784846   40275 command_runner.go:130] > # stream_idle_timeout = ""
	I0528 21:12:51.784860   40275 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0528 21:12:51.784870   40275 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0528 21:12:51.784876   40275 command_runner.go:130] > # minutes.
	I0528 21:12:51.785145   40275 command_runner.go:130] > # stream_tls_cert = ""
	I0528 21:12:51.785161   40275 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0528 21:12:51.785170   40275 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0528 21:12:51.785342   40275 command_runner.go:130] > # stream_tls_key = ""
	I0528 21:12:51.785355   40275 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0528 21:12:51.785367   40275 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0528 21:12:51.785417   40275 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0528 21:12:51.785596   40275 command_runner.go:130] > # stream_tls_ca = ""
	I0528 21:12:51.785607   40275 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0528 21:12:51.785640   40275 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0528 21:12:51.785657   40275 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0528 21:12:51.785782   40275 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0528 21:12:51.785798   40275 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0528 21:12:51.785807   40275 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0528 21:12:51.785817   40275 command_runner.go:130] > [crio.runtime]
	I0528 21:12:51.785825   40275 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0528 21:12:51.785834   40275 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0528 21:12:51.785840   40275 command_runner.go:130] > # "nofile=1024:2048"
	I0528 21:12:51.785849   40275 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0528 21:12:51.785952   40275 command_runner.go:130] > # default_ulimits = [
	I0528 21:12:51.786267   40275 command_runner.go:130] > # ]
	I0528 21:12:51.786284   40275 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0528 21:12:51.786322   40275 command_runner.go:130] > # no_pivot = false
	I0528 21:12:51.786337   40275 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0528 21:12:51.786348   40275 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0528 21:12:51.786444   40275 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0528 21:12:51.786457   40275 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0528 21:12:51.786465   40275 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0528 21:12:51.786483   40275 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0528 21:12:51.786614   40275 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0528 21:12:51.786630   40275 command_runner.go:130] > # Cgroup setting for conmon
	I0528 21:12:51.786641   40275 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0528 21:12:51.786800   40275 command_runner.go:130] > conmon_cgroup = "pod"
	I0528 21:12:51.786816   40275 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0528 21:12:51.786825   40275 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0528 21:12:51.786835   40275 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0528 21:12:51.786844   40275 command_runner.go:130] > conmon_env = [
	I0528 21:12:51.786969   40275 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0528 21:12:51.786998   40275 command_runner.go:130] > ]
	I0528 21:12:51.787011   40275 command_runner.go:130] > # Additional environment variables to set for all the
	I0528 21:12:51.787022   40275 command_runner.go:130] > # containers. These are overridden if set in the
	I0528 21:12:51.787034   40275 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0528 21:12:51.787281   40275 command_runner.go:130] > # default_env = [
	I0528 21:12:51.787413   40275 command_runner.go:130] > # ]
	I0528 21:12:51.787427   40275 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0528 21:12:51.787440   40275 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0528 21:12:51.787703   40275 command_runner.go:130] > # selinux = false
	I0528 21:12:51.787721   40275 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0528 21:12:51.787732   40275 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0528 21:12:51.787741   40275 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0528 21:12:51.788027   40275 command_runner.go:130] > # seccomp_profile = ""
	I0528 21:12:51.788042   40275 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0528 21:12:51.788052   40275 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0528 21:12:51.788062   40275 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0528 21:12:51.788070   40275 command_runner.go:130] > # which might increase security.
	I0528 21:12:51.788081   40275 command_runner.go:130] > # This option is currently deprecated,
	I0528 21:12:51.788095   40275 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0528 21:12:51.788135   40275 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0528 21:12:51.788154   40275 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0528 21:12:51.788165   40275 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0528 21:12:51.788178   40275 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0528 21:12:51.788187   40275 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0528 21:12:51.788199   40275 command_runner.go:130] > # This option supports live configuration reload.
	I0528 21:12:51.788517   40275 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0528 21:12:51.788535   40275 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0528 21:12:51.788543   40275 command_runner.go:130] > # the cgroup blockio controller.
	I0528 21:12:51.789974   40275 command_runner.go:130] > # blockio_config_file = ""
	I0528 21:12:51.789994   40275 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0528 21:12:51.790000   40275 command_runner.go:130] > # blockio parameters.
	I0528 21:12:51.790006   40275 command_runner.go:130] > # blockio_reload = false
	I0528 21:12:51.790018   40275 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0528 21:12:51.790026   40275 command_runner.go:130] > # irqbalance daemon.
	I0528 21:12:51.790035   40275 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0528 21:12:51.790045   40275 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0528 21:12:51.790062   40275 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0528 21:12:51.790074   40275 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0528 21:12:51.790084   40275 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0528 21:12:51.790095   40275 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0528 21:12:51.790106   40275 command_runner.go:130] > # This option supports live configuration reload.
	I0528 21:12:51.790113   40275 command_runner.go:130] > # rdt_config_file = ""
	I0528 21:12:51.790125   40275 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0528 21:12:51.790131   40275 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0528 21:12:51.790165   40275 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0528 21:12:51.790172   40275 command_runner.go:130] > # separate_pull_cgroup = ""
	I0528 21:12:51.790181   40275 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0528 21:12:51.790191   40275 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0528 21:12:51.790196   40275 command_runner.go:130] > # will be added.
	I0528 21:12:51.790202   40275 command_runner.go:130] > # default_capabilities = [
	I0528 21:12:51.790207   40275 command_runner.go:130] > # 	"CHOWN",
	I0528 21:12:51.790215   40275 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0528 21:12:51.790220   40275 command_runner.go:130] > # 	"FSETID",
	I0528 21:12:51.790226   40275 command_runner.go:130] > # 	"FOWNER",
	I0528 21:12:51.790239   40275 command_runner.go:130] > # 	"SETGID",
	I0528 21:12:51.790245   40275 command_runner.go:130] > # 	"SETUID",
	I0528 21:12:51.790250   40275 command_runner.go:130] > # 	"SETPCAP",
	I0528 21:12:51.790258   40275 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0528 21:12:51.790264   40275 command_runner.go:130] > # 	"KILL",
	I0528 21:12:51.790270   40275 command_runner.go:130] > # ]
	I0528 21:12:51.790282   40275 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0528 21:12:51.790295   40275 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0528 21:12:51.790305   40275 command_runner.go:130] > # add_inheritable_capabilities = false
	I0528 21:12:51.790315   40275 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0528 21:12:51.790321   40275 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0528 21:12:51.790325   40275 command_runner.go:130] > default_sysctls = [
	I0528 21:12:51.790330   40275 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0528 21:12:51.790336   40275 command_runner.go:130] > ]
	I0528 21:12:51.790341   40275 command_runner.go:130] > # List of devices on the host that a
	I0528 21:12:51.790347   40275 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0528 21:12:51.790353   40275 command_runner.go:130] > # allowed_devices = [
	I0528 21:12:51.790357   40275 command_runner.go:130] > # 	"/dev/fuse",
	I0528 21:12:51.790360   40275 command_runner.go:130] > # ]
	I0528 21:12:51.790365   40275 command_runner.go:130] > # List of additional devices. specified as
	I0528 21:12:51.790373   40275 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0528 21:12:51.790382   40275 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0528 21:12:51.790388   40275 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0528 21:12:51.790394   40275 command_runner.go:130] > # additional_devices = [
	I0528 21:12:51.790397   40275 command_runner.go:130] > # ]
	I0528 21:12:51.790402   40275 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0528 21:12:51.790408   40275 command_runner.go:130] > # cdi_spec_dirs = [
	I0528 21:12:51.790413   40275 command_runner.go:130] > # 	"/etc/cdi",
	I0528 21:12:51.790417   40275 command_runner.go:130] > # 	"/var/run/cdi",
	I0528 21:12:51.790422   40275 command_runner.go:130] > # ]
	I0528 21:12:51.790429   40275 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0528 21:12:51.790437   40275 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0528 21:12:51.790441   40275 command_runner.go:130] > # Defaults to false.
	I0528 21:12:51.790445   40275 command_runner.go:130] > # device_ownership_from_security_context = false
	I0528 21:12:51.790454   40275 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0528 21:12:51.790459   40275 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0528 21:12:51.790465   40275 command_runner.go:130] > # hooks_dir = [
	I0528 21:12:51.790470   40275 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0528 21:12:51.790475   40275 command_runner.go:130] > # ]
	I0528 21:12:51.790481   40275 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0528 21:12:51.790489   40275 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0528 21:12:51.790494   40275 command_runner.go:130] > # its default mounts from the following two files:
	I0528 21:12:51.790499   40275 command_runner.go:130] > #
	I0528 21:12:51.790504   40275 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0528 21:12:51.790512   40275 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0528 21:12:51.790517   40275 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0528 21:12:51.790523   40275 command_runner.go:130] > #
	I0528 21:12:51.790528   40275 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0528 21:12:51.790534   40275 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0528 21:12:51.790559   40275 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0528 21:12:51.790570   40275 command_runner.go:130] > #      only add mounts it finds in this file.
	I0528 21:12:51.790573   40275 command_runner.go:130] > #
	I0528 21:12:51.790577   40275 command_runner.go:130] > # default_mounts_file = ""
	I0528 21:12:51.790582   40275 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0528 21:12:51.790592   40275 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0528 21:12:51.790598   40275 command_runner.go:130] > pids_limit = 1024
	I0528 21:12:51.790605   40275 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0528 21:12:51.790613   40275 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0528 21:12:51.790620   40275 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0528 21:12:51.790636   40275 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0528 21:12:51.790642   40275 command_runner.go:130] > # log_size_max = -1
	I0528 21:12:51.790649   40275 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0528 21:12:51.790653   40275 command_runner.go:130] > # log_to_journald = false
	I0528 21:12:51.790659   40275 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0528 21:12:51.790663   40275 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0528 21:12:51.790668   40275 command_runner.go:130] > # Path to directory for container attach sockets.
	I0528 21:12:51.790673   40275 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0528 21:12:51.790678   40275 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0528 21:12:51.790683   40275 command_runner.go:130] > # bind_mount_prefix = ""
	I0528 21:12:51.790688   40275 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0528 21:12:51.790693   40275 command_runner.go:130] > # read_only = false
	I0528 21:12:51.790698   40275 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0528 21:12:51.790704   40275 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0528 21:12:51.790710   40275 command_runner.go:130] > # live configuration reload.
	I0528 21:12:51.790714   40275 command_runner.go:130] > # log_level = "info"
	I0528 21:12:51.790720   40275 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0528 21:12:51.790726   40275 command_runner.go:130] > # This option supports live configuration reload.
	I0528 21:12:51.790729   40275 command_runner.go:130] > # log_filter = ""
	I0528 21:12:51.790735   40275 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0528 21:12:51.790741   40275 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0528 21:12:51.790746   40275 command_runner.go:130] > # separated by comma.
	I0528 21:12:51.790753   40275 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0528 21:12:51.790760   40275 command_runner.go:130] > # uid_mappings = ""
	I0528 21:12:51.790765   40275 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0528 21:12:51.790771   40275 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0528 21:12:51.790775   40275 command_runner.go:130] > # separated by comma.
	I0528 21:12:51.790787   40275 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0528 21:12:51.790794   40275 command_runner.go:130] > # gid_mappings = ""
	I0528 21:12:51.790800   40275 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0528 21:12:51.790809   40275 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0528 21:12:51.790815   40275 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0528 21:12:51.790825   40275 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0528 21:12:51.790829   40275 command_runner.go:130] > # minimum_mappable_uid = -1
	I0528 21:12:51.790835   40275 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0528 21:12:51.790842   40275 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0528 21:12:51.790848   40275 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0528 21:12:51.790857   40275 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0528 21:12:51.790861   40275 command_runner.go:130] > # minimum_mappable_gid = -1
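For illustration, a minimal sketch of how these (now deprecated) mapping options would be written, assuming a hypothetical 65536-ID range starting at host ID 100000; the numbers are placeholders, not values from this run:

	# containerID:hostID:size; multiple ranges are comma-separated (deprecated, see KEP-127)
	uid_mappings = "0:100000:65536"
	gid_mappings = "0:100000:65536"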
	I0528 21:12:51.790866   40275 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0528 21:12:51.790874   40275 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0528 21:12:51.790879   40275 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0528 21:12:51.790885   40275 command_runner.go:130] > # ctr_stop_timeout = 30
	I0528 21:12:51.790891   40275 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0528 21:12:51.790896   40275 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0528 21:12:51.790900   40275 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0528 21:12:51.790905   40275 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0528 21:12:51.790909   40275 command_runner.go:130] > drop_infra_ctr = false
	I0528 21:12:51.790914   40275 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0528 21:12:51.790919   40275 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0528 21:12:51.790926   40275 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0528 21:12:51.790929   40275 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0528 21:12:51.790935   40275 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0528 21:12:51.790941   40275 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0528 21:12:51.790948   40275 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0528 21:12:51.790953   40275 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0528 21:12:51.790957   40275 command_runner.go:130] > # shared_cpuset = ""
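As a concrete illustration of the Linux CPU list format referenced above, a hedged sketch with arbitrary CPU numbers (on a real node infra_ctr_cpuset would typically mirror the kubelet's reserved-cpus):

	# pin infra containers to CPUs 0-1, allow CPUs 2-3 and 6 to be shared with guaranteed containers
	infra_ctr_cpuset = "0-1"
	shared_cpuset = "2-3,6"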
	I0528 21:12:51.790962   40275 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0528 21:12:51.790967   40275 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0528 21:12:51.790973   40275 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0528 21:12:51.790980   40275 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0528 21:12:51.790987   40275 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0528 21:12:51.790993   40275 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0528 21:12:51.791000   40275 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0528 21:12:51.791006   40275 command_runner.go:130] > # enable_criu_support = false
	I0528 21:12:51.791011   40275 command_runner.go:130] > # Enable/disable the generation of the container,
	I0528 21:12:51.791018   40275 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0528 21:12:51.791022   40275 command_runner.go:130] > # enable_pod_events = false
	I0528 21:12:51.791028   40275 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0528 21:12:51.791039   40275 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0528 21:12:51.791045   40275 command_runner.go:130] > # default_runtime = "runc"
	I0528 21:12:51.791051   40275 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0528 21:12:51.791062   40275 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0528 21:12:51.791075   40275 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0528 21:12:51.791087   40275 command_runner.go:130] > # creation as a file is not desired either.
	I0528 21:12:51.791102   40275 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0528 21:12:51.791116   40275 command_runner.go:130] > # the hostname is being managed dynamically.
	I0528 21:12:51.791126   40275 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0528 21:12:51.791132   40275 command_runner.go:130] > # ]
	I0528 21:12:51.791145   40275 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0528 21:12:51.791157   40275 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0528 21:12:51.791170   40275 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0528 21:12:51.791181   40275 command_runner.go:130] > # Each entry in the table should follow the format:
	I0528 21:12:51.791189   40275 command_runner.go:130] > #
	I0528 21:12:51.791197   40275 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0528 21:12:51.791205   40275 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0528 21:12:51.791233   40275 command_runner.go:130] > # runtime_type = "oci"
	I0528 21:12:51.791244   40275 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0528 21:12:51.791251   40275 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0528 21:12:51.791261   40275 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0528 21:12:51.791271   40275 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0528 21:12:51.791277   40275 command_runner.go:130] > # monitor_env = []
	I0528 21:12:51.791286   40275 command_runner.go:130] > # privileged_without_host_devices = false
	I0528 21:12:51.791295   40275 command_runner.go:130] > # allowed_annotations = []
	I0528 21:12:51.791305   40275 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0528 21:12:51.791314   40275 command_runner.go:130] > # Where:
	I0528 21:12:51.791322   40275 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0528 21:12:51.791335   40275 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0528 21:12:51.791347   40275 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0528 21:12:51.791358   40275 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0528 21:12:51.791364   40275 command_runner.go:130] > #   in $PATH.
	I0528 21:12:51.791376   40275 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0528 21:12:51.791387   40275 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0528 21:12:51.791397   40275 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0528 21:12:51.791407   40275 command_runner.go:130] > #   state.
	I0528 21:12:51.791424   40275 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0528 21:12:51.791431   40275 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0528 21:12:51.791437   40275 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0528 21:12:51.791443   40275 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0528 21:12:51.791448   40275 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0528 21:12:51.791454   40275 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0528 21:12:51.791459   40275 command_runner.go:130] > #   The currently recognized values are:
	I0528 21:12:51.791465   40275 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0528 21:12:51.791478   40275 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0528 21:12:51.791484   40275 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0528 21:12:51.791492   40275 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0528 21:12:51.791499   40275 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0528 21:12:51.791505   40275 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0528 21:12:51.791514   40275 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0528 21:12:51.791520   40275 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0528 21:12:51.791529   40275 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0528 21:12:51.791535   40275 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0528 21:12:51.791541   40275 command_runner.go:130] > #   deprecated option "conmon".
	I0528 21:12:51.791548   40275 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0528 21:12:51.791555   40275 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0528 21:12:51.791561   40275 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0528 21:12:51.791569   40275 command_runner.go:130] > #   should be moved to the container's cgroup
	I0528 21:12:51.791575   40275 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0528 21:12:51.791581   40275 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0528 21:12:51.791592   40275 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0528 21:12:51.791604   40275 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0528 21:12:51.791612   40275 command_runner.go:130] > #
	I0528 21:12:51.791623   40275 command_runner.go:130] > # Using the seccomp notifier feature:
	I0528 21:12:51.791630   40275 command_runner.go:130] > #
	I0528 21:12:51.791641   40275 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0528 21:12:51.791654   40275 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0528 21:12:51.791660   40275 command_runner.go:130] > #
	I0528 21:12:51.791670   40275 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0528 21:12:51.791683   40275 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0528 21:12:51.791688   40275 command_runner.go:130] > #
	I0528 21:12:51.791695   40275 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0528 21:12:51.791702   40275 command_runner.go:130] > # feature.
	I0528 21:12:51.791710   40275 command_runner.go:130] > #
	I0528 21:12:51.791718   40275 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0528 21:12:51.791727   40275 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0528 21:12:51.791733   40275 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0528 21:12:51.791742   40275 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0528 21:12:51.791748   40275 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0528 21:12:51.791753   40275 command_runner.go:130] > #
	I0528 21:12:51.791759   40275 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0528 21:12:51.791767   40275 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0528 21:12:51.791771   40275 command_runner.go:130] > #
	I0528 21:12:51.791776   40275 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0528 21:12:51.791785   40275 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0528 21:12:51.791790   40275 command_runner.go:130] > #
	I0528 21:12:51.791796   40275 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0528 21:12:51.791804   40275 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0528 21:12:51.791808   40275 command_runner.go:130] > # limitation.
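A minimal sketch of a runtime handler that allows the seccomp notifier annotation, assuming runc at the path below; the handler name "runc-notify" is purely illustrative, and the pod itself would additionally need the annotation io.kubernetes.cri-o.seccompNotifierAction=stop plus restartPolicy: Never:

	# hypothetical handler; only allowed_annotations is essential for the notifier feature
	[crio.runtime.runtimes.runc-notify]
	runtime_path = "/usr/bin/runc"
	runtime_type = "oci"
	allowed_annotations = ["io.kubernetes.cri-o.seccompNotifierAction"]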
	I0528 21:12:51.791820   40275 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0528 21:12:51.791827   40275 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0528 21:12:51.791842   40275 command_runner.go:130] > runtime_type = "oci"
	I0528 21:12:51.791851   40275 command_runner.go:130] > runtime_root = "/run/runc"
	I0528 21:12:51.791856   40275 command_runner.go:130] > runtime_config_path = ""
	I0528 21:12:51.791861   40275 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0528 21:12:51.791867   40275 command_runner.go:130] > monitor_cgroup = "pod"
	I0528 21:12:51.791871   40275 command_runner.go:130] > monitor_exec_cgroup = ""
	I0528 21:12:51.791875   40275 command_runner.go:130] > monitor_env = [
	I0528 21:12:51.791881   40275 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0528 21:12:51.791887   40275 command_runner.go:130] > ]
	I0528 21:12:51.791892   40275 command_runner.go:130] > privileged_without_host_devices = false
	I0528 21:12:51.791902   40275 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0528 21:12:51.791909   40275 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0528 21:12:51.791915   40275 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0528 21:12:51.791923   40275 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0528 21:12:51.791933   40275 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0528 21:12:51.791939   40275 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0528 21:12:51.791950   40275 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0528 21:12:51.791960   40275 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0528 21:12:51.791966   40275 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0528 21:12:51.791975   40275 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0528 21:12:51.791979   40275 command_runner.go:130] > # Example:
	I0528 21:12:51.791985   40275 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0528 21:12:51.791989   40275 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0528 21:12:51.791997   40275 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0528 21:12:51.792002   40275 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0528 21:12:51.792005   40275 command_runner.go:130] > # cpuset = 0
	I0528 21:12:51.792009   40275 command_runner.go:130] > # cpushares = "0-1"
	I0528 21:12:51.792012   40275 command_runner.go:130] > # Where:
	I0528 21:12:51.792016   40275 command_runner.go:130] > # The workload name is workload-type.
	I0528 21:12:51.792023   40275 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0528 21:12:51.792028   40275 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0528 21:12:51.792034   40275 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0528 21:12:51.792041   40275 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0528 21:12:51.792046   40275 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0528 21:12:51.792050   40275 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0528 21:12:51.792056   40275 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0528 21:12:51.792059   40275 command_runner.go:130] > # Default value is set to true
	I0528 21:12:51.792063   40275 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0528 21:12:51.792068   40275 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0528 21:12:51.792074   40275 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0528 21:12:51.792081   40275 command_runner.go:130] > # Default value is set to 'false'
	I0528 21:12:51.792087   40275 command_runner.go:130] > # disable_hostport_mapping = false
	I0528 21:12:51.792097   40275 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0528 21:12:51.792102   40275 command_runner.go:130] > #
	I0528 21:12:51.792110   40275 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0528 21:12:51.792119   40275 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0528 21:12:51.792131   40275 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0528 21:12:51.792142   40275 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0528 21:12:51.792151   40275 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0528 21:12:51.792155   40275 command_runner.go:130] > [crio.image]
	I0528 21:12:51.792165   40275 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0528 21:12:51.792171   40275 command_runner.go:130] > # default_transport = "docker://"
	I0528 21:12:51.792180   40275 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0528 21:12:51.792186   40275 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0528 21:12:51.792190   40275 command_runner.go:130] > # global_auth_file = ""
	I0528 21:12:51.792195   40275 command_runner.go:130] > # The image used to instantiate infra containers.
	I0528 21:12:51.792200   40275 command_runner.go:130] > # This option supports live configuration reload.
	I0528 21:12:51.792204   40275 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0528 21:12:51.792210   40275 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0528 21:12:51.792218   40275 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0528 21:12:51.792222   40275 command_runner.go:130] > # This option supports live configuration reload.
	I0528 21:12:51.792233   40275 command_runner.go:130] > # pause_image_auth_file = ""
	I0528 21:12:51.792240   40275 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0528 21:12:51.792248   40275 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0528 21:12:51.792254   40275 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0528 21:12:51.792264   40275 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0528 21:12:51.792269   40275 command_runner.go:130] > # pause_command = "/pause"
	I0528 21:12:51.792275   40275 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0528 21:12:51.792281   40275 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0528 21:12:51.792289   40275 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0528 21:12:51.792295   40275 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0528 21:12:51.792302   40275 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0528 21:12:51.792307   40275 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0528 21:12:51.792313   40275 command_runner.go:130] > # pinned_images = [
	I0528 21:12:51.792317   40275 command_runner.go:130] > # ]
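To make the exact/glob/keyword matching concrete, a hedged example of a pinned_images list; the image names are placeholders rather than images used in this run:

	pinned_images = [
		"registry.k8s.io/pause:3.9",   # exact match (entire name)
		"quay.io/example/agent*",      # glob match (trailing wildcard)
		"*coredns*",                   # keyword match (wildcards on both ends)
	]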
	I0528 21:12:51.792325   40275 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0528 21:12:51.792330   40275 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0528 21:12:51.792336   40275 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0528 21:12:51.792342   40275 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0528 21:12:51.792347   40275 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0528 21:12:51.792351   40275 command_runner.go:130] > # signature_policy = ""
	I0528 21:12:51.792358   40275 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0528 21:12:51.792365   40275 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0528 21:12:51.792373   40275 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0528 21:12:51.792379   40275 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0528 21:12:51.792387   40275 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0528 21:12:51.792392   40275 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0528 21:12:51.792400   40275 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0528 21:12:51.792405   40275 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0528 21:12:51.792412   40275 command_runner.go:130] > # changing them here.
	I0528 21:12:51.792416   40275 command_runner.go:130] > # insecure_registries = [
	I0528 21:12:51.792418   40275 command_runner.go:130] > # ]
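Since the comments above recommend configuring registries in /etc/containers/registries.conf rather than here, a minimal sketch of that file for a single insecure registry; the registry address is hypothetical:

	# /etc/containers/registries.conf, containers-registries.conf(5) version 2 format
	[[registry]]
	location = "registry.internal:5000"
	insecure = true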
	I0528 21:12:51.792424   40275 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0528 21:12:51.792431   40275 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0528 21:12:51.792435   40275 command_runner.go:130] > # image_volumes = "mkdir"
	I0528 21:12:51.792443   40275 command_runner.go:130] > # Temporary directory to use for storing big files
	I0528 21:12:51.792447   40275 command_runner.go:130] > # big_files_temporary_dir = ""
	I0528 21:12:51.792452   40275 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0528 21:12:51.792458   40275 command_runner.go:130] > # CNI plugins.
	I0528 21:12:51.792462   40275 command_runner.go:130] > [crio.network]
	I0528 21:12:51.792467   40275 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0528 21:12:51.792473   40275 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0528 21:12:51.792476   40275 command_runner.go:130] > # cni_default_network = ""
	I0528 21:12:51.792481   40275 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0528 21:12:51.792486   40275 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0528 21:12:51.792491   40275 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0528 21:12:51.792497   40275 command_runner.go:130] > # plugin_dirs = [
	I0528 21:12:51.792501   40275 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0528 21:12:51.792504   40275 command_runner.go:130] > # ]
	I0528 21:12:51.792510   40275 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0528 21:12:51.792515   40275 command_runner.go:130] > [crio.metrics]
	I0528 21:12:51.792520   40275 command_runner.go:130] > # Globally enable or disable metrics support.
	I0528 21:12:51.792524   40275 command_runner.go:130] > enable_metrics = true
	I0528 21:12:51.792528   40275 command_runner.go:130] > # Specify enabled metrics collectors.
	I0528 21:12:51.792534   40275 command_runner.go:130] > # Per default all metrics are enabled.
	I0528 21:12:51.792540   40275 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0528 21:12:51.792548   40275 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0528 21:12:51.792553   40275 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0528 21:12:51.792560   40275 command_runner.go:130] > # metrics_collectors = [
	I0528 21:12:51.792564   40275 command_runner.go:130] > # 	"operations",
	I0528 21:12:51.792570   40275 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0528 21:12:51.792574   40275 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0528 21:12:51.792580   40275 command_runner.go:130] > # 	"operations_errors",
	I0528 21:12:51.792584   40275 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0528 21:12:51.792588   40275 command_runner.go:130] > # 	"image_pulls_by_name",
	I0528 21:12:51.792592   40275 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0528 21:12:51.792598   40275 command_runner.go:130] > # 	"image_pulls_failures",
	I0528 21:12:51.792602   40275 command_runner.go:130] > # 	"image_pulls_successes",
	I0528 21:12:51.792607   40275 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0528 21:12:51.792612   40275 command_runner.go:130] > # 	"image_layer_reuse",
	I0528 21:12:51.792618   40275 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0528 21:12:51.792623   40275 command_runner.go:130] > # 	"containers_oom_total",
	I0528 21:12:51.792627   40275 command_runner.go:130] > # 	"containers_oom",
	I0528 21:12:51.792630   40275 command_runner.go:130] > # 	"processes_defunct",
	I0528 21:12:51.792634   40275 command_runner.go:130] > # 	"operations_total",
	I0528 21:12:51.792638   40275 command_runner.go:130] > # 	"operations_latency_seconds",
	I0528 21:12:51.792642   40275 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0528 21:12:51.792648   40275 command_runner.go:130] > # 	"operations_errors_total",
	I0528 21:12:51.792652   40275 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0528 21:12:51.792659   40275 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0528 21:12:51.792663   40275 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0528 21:12:51.792669   40275 command_runner.go:130] > # 	"image_pulls_success_total",
	I0528 21:12:51.792672   40275 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0528 21:12:51.792676   40275 command_runner.go:130] > # 	"containers_oom_count_total",
	I0528 21:12:51.792684   40275 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0528 21:12:51.792689   40275 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0528 21:12:51.792692   40275 command_runner.go:130] > # ]
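A hedged sketch of narrowing the enabled collectors, illustrating the prefix equivalence described above; the chosen subset is arbitrary:

	enable_metrics = true
	metrics_collectors = [
		"operations",                  # same collector as "crio_operations" / "container_runtime_crio_operations"
		"image_pulls_failure_total",
	]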
	I0528 21:12:51.792697   40275 command_runner.go:130] > # The port on which the metrics server will listen.
	I0528 21:12:51.792703   40275 command_runner.go:130] > # metrics_port = 9090
	I0528 21:12:51.792708   40275 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0528 21:12:51.792713   40275 command_runner.go:130] > # metrics_socket = ""
	I0528 21:12:51.792718   40275 command_runner.go:130] > # The certificate for the secure metrics server.
	I0528 21:12:51.792726   40275 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0528 21:12:51.792732   40275 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0528 21:12:51.792740   40275 command_runner.go:130] > # certificate on any modification event.
	I0528 21:12:51.792744   40275 command_runner.go:130] > # metrics_cert = ""
	I0528 21:12:51.792749   40275 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0528 21:12:51.792755   40275 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0528 21:12:51.792758   40275 command_runner.go:130] > # metrics_key = ""
	I0528 21:12:51.792763   40275 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0528 21:12:51.792769   40275 command_runner.go:130] > [crio.tracing]
	I0528 21:12:51.792775   40275 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0528 21:12:51.792781   40275 command_runner.go:130] > # enable_tracing = false
	I0528 21:12:51.792786   40275 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0528 21:12:51.792792   40275 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0528 21:12:51.792798   40275 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0528 21:12:51.792805   40275 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0528 21:12:51.792809   40275 command_runner.go:130] > # CRI-O NRI configuration.
	I0528 21:12:51.792815   40275 command_runner.go:130] > [crio.nri]
	I0528 21:12:51.792819   40275 command_runner.go:130] > # Globally enable or disable NRI.
	I0528 21:12:51.792823   40275 command_runner.go:130] > # enable_nri = false
	I0528 21:12:51.792827   40275 command_runner.go:130] > # NRI socket to listen on.
	I0528 21:12:51.792833   40275 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0528 21:12:51.792838   40275 command_runner.go:130] > # NRI plugin directory to use.
	I0528 21:12:51.792844   40275 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0528 21:12:51.792849   40275 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0528 21:12:51.792854   40275 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0528 21:12:51.792861   40275 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0528 21:12:51.792865   40275 command_runner.go:130] > # nri_disable_connections = false
	I0528 21:12:51.792873   40275 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0528 21:12:51.792877   40275 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0528 21:12:51.792884   40275 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0528 21:12:51.792889   40275 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0528 21:12:51.792897   40275 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0528 21:12:51.792901   40275 command_runner.go:130] > [crio.stats]
	I0528 21:12:51.792909   40275 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0528 21:12:51.792914   40275 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0528 21:12:51.792920   40275 command_runner.go:130] > # stats_collection_period = 0
	I0528 21:12:51.792954   40275 command_runner.go:130] ! time="2024-05-28 21:12:51.748567936Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0528 21:12:51.792979   40275 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0528 21:12:51.793097   40275 cni.go:84] Creating CNI manager for ""
	I0528 21:12:51.793111   40275 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0528 21:12:51.793121   40275 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0528 21:12:51.793149   40275 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.65 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-869191 NodeName:multinode-869191 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.65"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.65 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0528 21:12:51.793280   40275 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.65
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-869191"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.65
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.65"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0528 21:12:51.793336   40275 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0528 21:12:51.804121   40275 command_runner.go:130] > kubeadm
	I0528 21:12:51.804142   40275 command_runner.go:130] > kubectl
	I0528 21:12:51.804146   40275 command_runner.go:130] > kubelet
	I0528 21:12:51.804167   40275 binaries.go:44] Found k8s binaries, skipping transfer
	I0528 21:12:51.804238   40275 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0528 21:12:51.814551   40275 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0528 21:12:51.832448   40275 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0528 21:12:51.849999   40275 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0528 21:12:51.866911   40275 ssh_runner.go:195] Run: grep 192.168.39.65	control-plane.minikube.internal$ /etc/hosts
	I0528 21:12:51.871087   40275 command_runner.go:130] > 192.168.39.65	control-plane.minikube.internal
	I0528 21:12:51.871166   40275 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 21:12:52.009893   40275 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 21:12:52.025381   40275 certs.go:68] Setting up /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/multinode-869191 for IP: 192.168.39.65
	I0528 21:12:52.025420   40275 certs.go:194] generating shared ca certs ...
	I0528 21:12:52.025440   40275 certs.go:226] acquiring lock for ca certs: {Name:mkf081a2e878fd451cfbdf01dc28a5f7a6a3abc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:12:52.025642   40275 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key
	I0528 21:12:52.025703   40275 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key
	I0528 21:12:52.025719   40275 certs.go:256] generating profile certs ...
	I0528 21:12:52.025852   40275 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/multinode-869191/client.key
	I0528 21:12:52.025953   40275 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/multinode-869191/apiserver.key.f28ac419
	I0528 21:12:52.026004   40275 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/multinode-869191/proxy-client.key
	I0528 21:12:52.026017   40275 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0528 21:12:52.026033   40275 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0528 21:12:52.026059   40275 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0528 21:12:52.026076   40275 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0528 21:12:52.026092   40275 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/multinode-869191/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0528 21:12:52.026111   40275 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/multinode-869191/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0528 21:12:52.026130   40275 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/multinode-869191/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0528 21:12:52.026144   40275 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/multinode-869191/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0528 21:12:52.026205   40275 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem (1338 bytes)
	W0528 21:12:52.026280   40275 certs.go:480] ignoring /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760_empty.pem, impossibly tiny 0 bytes
	I0528 21:12:52.026294   40275 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem (1675 bytes)
	I0528 21:12:52.026330   40275 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem (1078 bytes)
	I0528 21:12:52.026361   40275 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem (1123 bytes)
	I0528 21:12:52.026397   40275 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem (1675 bytes)
	I0528 21:12:52.026440   40275 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem (1708 bytes)
	I0528 21:12:52.026468   40275 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem -> /usr/share/ca-certificates/117602.pem
	I0528 21:12:52.026485   40275 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0528 21:12:52.026497   40275 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem -> /usr/share/ca-certificates/11760.pem
	I0528 21:12:52.027085   40275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0528 21:12:52.052972   40275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0528 21:12:52.077548   40275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0528 21:12:52.102642   40275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0528 21:12:52.127998   40275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/multinode-869191/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0528 21:12:52.151885   40275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/multinode-869191/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0528 21:12:52.175643   40275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/multinode-869191/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0528 21:12:52.198635   40275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/multinode-869191/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0528 21:12:52.222689   40275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem --> /usr/share/ca-certificates/117602.pem (1708 bytes)
	I0528 21:12:52.248536   40275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0528 21:12:52.274644   40275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem --> /usr/share/ca-certificates/11760.pem (1338 bytes)
	I0528 21:12:52.299251   40275 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0528 21:12:52.339602   40275 ssh_runner.go:195] Run: openssl version
	I0528 21:12:52.345751   40275 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0528 21:12:52.345836   40275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11760.pem && ln -fs /usr/share/ca-certificates/11760.pem /etc/ssl/certs/11760.pem"
	I0528 21:12:52.356730   40275 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11760.pem
	I0528 21:12:52.361922   40275 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 28 20:34 /usr/share/ca-certificates/11760.pem
	I0528 21:12:52.362069   40275 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 28 20:34 /usr/share/ca-certificates/11760.pem
	I0528 21:12:52.362122   40275 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11760.pem
	I0528 21:12:52.368147   40275 command_runner.go:130] > 51391683
	I0528 21:12:52.368440   40275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11760.pem /etc/ssl/certs/51391683.0"
	I0528 21:12:52.377946   40275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/117602.pem && ln -fs /usr/share/ca-certificates/117602.pem /etc/ssl/certs/117602.pem"
	I0528 21:12:52.388802   40275 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117602.pem
	I0528 21:12:52.393314   40275 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 28 20:34 /usr/share/ca-certificates/117602.pem
	I0528 21:12:52.393442   40275 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 28 20:34 /usr/share/ca-certificates/117602.pem
	I0528 21:12:52.393481   40275 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117602.pem
	I0528 21:12:52.399032   40275 command_runner.go:130] > 3ec20f2e
	I0528 21:12:52.399273   40275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/117602.pem /etc/ssl/certs/3ec20f2e.0"
	I0528 21:12:52.408513   40275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0528 21:12:52.419175   40275 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0528 21:12:52.423781   40275 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 28 20:22 /usr/share/ca-certificates/minikubeCA.pem
	I0528 21:12:52.423805   40275 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 28 20:22 /usr/share/ca-certificates/minikubeCA.pem
	I0528 21:12:52.423835   40275 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0528 21:12:52.430161   40275 command_runner.go:130] > b5213941
	I0528 21:12:52.430216   40275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0528 21:12:52.439365   40275 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0528 21:12:52.443652   40275 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0528 21:12:52.443667   40275 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0528 21:12:52.443673   40275 command_runner.go:130] > Device: 253,1	Inode: 8386582     Links: 1
	I0528 21:12:52.443682   40275 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0528 21:12:52.443692   40275 command_runner.go:130] > Access: 2024-05-28 21:06:39.699064296 +0000
	I0528 21:12:52.443701   40275 command_runner.go:130] > Modify: 2024-05-28 21:06:39.699064296 +0000
	I0528 21:12:52.443709   40275 command_runner.go:130] > Change: 2024-05-28 21:06:39.699064296 +0000
	I0528 21:12:52.443715   40275 command_runner.go:130] >  Birth: 2024-05-28 21:06:39.699064296 +0000
	I0528 21:12:52.443848   40275 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0528 21:12:52.449812   40275 command_runner.go:130] > Certificate will not expire
	I0528 21:12:52.449862   40275 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0528 21:12:52.455490   40275 command_runner.go:130] > Certificate will not expire
	I0528 21:12:52.455683   40275 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0528 21:12:52.461362   40275 command_runner.go:130] > Certificate will not expire
	I0528 21:12:52.461645   40275 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0528 21:12:52.467901   40275 command_runner.go:130] > Certificate will not expire
	I0528 21:12:52.467971   40275 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0528 21:12:52.473712   40275 command_runner.go:130] > Certificate will not expire
	I0528 21:12:52.473796   40275 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0528 21:12:52.480027   40275 command_runner.go:130] > Certificate will not expire
	I0528 21:12:52.480109   40275 kubeadm.go:391] StartCluster: {Name:multinode-869191 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-869191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.98 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.154 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:12:52.480265   40275 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0528 21:12:52.480330   40275 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0528 21:12:52.517096   40275 command_runner.go:130] > bfc4c2fb4e8cc12564c84e740611c5476bfa52dc073b4104d30a3a2d46e3c337
	I0528 21:12:52.517116   40275 command_runner.go:130] > 3fbc4ce7e67f1b2a0b85ca56319e69227c950c165c7baaa1a4430a7fae20be76
	I0528 21:12:52.517123   40275 command_runner.go:130] > c3c2b6923bfc3246d147f7c457a56329e4d1a73b2d1b7a9c950934b780d0a055
	I0528 21:12:52.517128   40275 command_runner.go:130] > 6025504364d6efd0fffcdbebe00d742ff98eed2d58ab5e78e9ee8563f959b7ac
	I0528 21:12:52.517133   40275 command_runner.go:130] > 4952f4946567c8b4ade9afce440b96de146c574862dacb82f4f7c0e06e1b25d0
	I0528 21:12:52.517138   40275 command_runner.go:130] > 1aa37e66c157492eb9bc9a49f3f4e20ee4a7ce5bcf6f35057d414967b79bb458
	I0528 21:12:52.517143   40275 command_runner.go:130] > 64b17a6d3213b35e9534a4c21ffacfb062961eb24e362dd0de814efb7f3a03d5
	I0528 21:12:52.517150   40275 command_runner.go:130] > e2197d4ac3e76cff0d93c083bdceedb63867452c1b621ea1537065ca628fedb9
	I0528 21:12:52.518602   40275 cri.go:89] found id: "bfc4c2fb4e8cc12564c84e740611c5476bfa52dc073b4104d30a3a2d46e3c337"
	I0528 21:12:52.518621   40275 cri.go:89] found id: "3fbc4ce7e67f1b2a0b85ca56319e69227c950c165c7baaa1a4430a7fae20be76"
	I0528 21:12:52.518627   40275 cri.go:89] found id: "c3c2b6923bfc3246d147f7c457a56329e4d1a73b2d1b7a9c950934b780d0a055"
	I0528 21:12:52.518632   40275 cri.go:89] found id: "6025504364d6efd0fffcdbebe00d742ff98eed2d58ab5e78e9ee8563f959b7ac"
	I0528 21:12:52.518636   40275 cri.go:89] found id: "4952f4946567c8b4ade9afce440b96de146c574862dacb82f4f7c0e06e1b25d0"
	I0528 21:12:52.518641   40275 cri.go:89] found id: "1aa37e66c157492eb9bc9a49f3f4e20ee4a7ce5bcf6f35057d414967b79bb458"
	I0528 21:12:52.518645   40275 cri.go:89] found id: "64b17a6d3213b35e9534a4c21ffacfb062961eb24e362dd0de814efb7f3a03d5"
	I0528 21:12:52.518649   40275 cri.go:89] found id: "e2197d4ac3e76cff0d93c083bdceedb63867452c1b621ea1537065ca628fedb9"
	I0528 21:12:52.518655   40275 cri.go:89] found id: ""
	I0528 21:12:52.518701   40275 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	May 28 21:14:18 multinode-869191 crio[2885]: time="2024-05-28 21:14:18.481740343Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716930858481716820,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143025,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c9345b78-cee8-4a7c-b243-066d9574384f name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:14:18 multinode-869191 crio[2885]: time="2024-05-28 21:14:18.482705320Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4cb66fd6-1de1-4d99-869f-4b64e64ee459 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:14:18 multinode-869191 crio[2885]: time="2024-05-28 21:14:18.482779034Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4cb66fd6-1de1-4d99-869f-4b64e64ee459 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:14:18 multinode-869191 crio[2885]: time="2024-05-28 21:14:18.483272551Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:468d459e1e6230b2bc259c2cfe2705e236bd61bb34da278adef4636f8343fff8,PodSandboxId:d2f96bf8a39d494580400adabf53857c4e386c8dcb1362b00ab04d496415c96c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716930812503603829,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qqxb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8887a9a-26fd-42dd-b3c5-9ff88f628dae,},Annotations:map[string]string{io.kubernetes.container.hash: 171d3763,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:252b9a44a28e675b9f3ec426ddf2b2aba3af066f479de4092e3984bab9b4bdff,PodSandboxId:e476c463b3a6a2fdb96a62e5417d069fb550ac656070ad7b11b607ef9ca879a9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1716930779044101717,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-24k26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59c6483f-f65f-490c-8b1e-7b0b425a80cf,},Annotations:map[string]string{io.kubernetes.container.hash: 9426a2bc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4136b5bbb1fb11d4eb9b12e9bfb612b35c44bb147609cf3895c0d06f2e74fc6f,PodSandboxId:9f7fd13849b4d95056104af0680035bfb2c6849cd3148d0ebe3dd1506798fbaf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716930778892472387,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mj9rx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdacf113-fef4-4a34-af75-2a7908dca02f,},Annotations:map[string]string{io.kubernetes.container.hash: 35118a2f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc094a6daa47a8046b8497314256954e60b46b95bd2991e15338af3c3e6a9ae7,PodSandboxId:b14b96dafa937faa66fe5d1b341110baca885c8e88b87c34755c133a729a7db6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716930778838695092,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sj7k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9619acba-a019-4080-8c86-f63e7ce399bb,},Annotations:map[string]
string{io.kubernetes.container.hash: 8c26f61,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72434233ddf0cf45536340a7aa617a6d64512d73e0811d01074b2a626d43f79c,PodSandboxId:468780c9e91e5b9a0a73de12ffeb3cfa868878a69a66c11fa7d029d39f2c2776,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716930778757028056,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29c00081-275d-4209-bf8a-74849ccf882c,},Annotations:map[string]string{io.kub
ernetes.container.hash: 20b43327,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b852ca44def88ca8173aed69919592580b7bd4d84c154208ea11acd9e2737eb9,PodSandboxId:59f350a36f234501e2aa4d79d488bf846f36b0ea20e18b685396a08a6b7fe36d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716930774961307938,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e6ff0e976b71b85998ebb889a77071f,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3acf49a269336371c288f456c763a8f428cec04ed7f1062c7205ec7389032a2a,PodSandboxId:cfece9dd614bcdc4525a9fde0db763cf742c17664fadf84b084efc3fb49bde24,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716930774986948869,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8c10b71b468300869c8dff507045cfc,},Annotations:map[string]string{io.kubernetes.container.hash: 5b0a154e,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:067e3bfe9287fedd95a060e3d4b3b951427c0cb001493040d415db25f3f47d65,PodSandboxId:771af933b235ff9d38a752a3b25823afdfb643624815725a013a1b25c70e35f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716930775021769207,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92a4fb30036b6ac2c880fe15ce44d259,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fff07e29b61aaaf721af4f63571b7d45d059365093210ebd0a2ce382f0244b5b,PodSandboxId:8441eeb6f8cc28fd102d0cc70272043bd2d8c7fbfd607fb802e5eef8e8f25bf5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716930774935473608,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41ae7fb09cc32893ade671b91f69afc3,},Annotations:map[string]string{io.kubernetes.container.hash: be999250,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3ad8633ed09b8a5144fc6c780d4e37526722b597b04e9d62eddf8487685aced,PodSandboxId:2e646fda19ef38cd2073732544812dbcfe794b780fd168527446e897d29e03f0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716930471845338505,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qqxb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8887a9a-26fd-42dd-b3c5-9ff88f628dae,},Annotations:map[string]string{io.kubernetes.container.hash: 171d3763,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfc4c2fb4e8cc12564c84e740611c5476bfa52dc073b4104d30a3a2d46e3c337,PodSandboxId:352504b5fcfc2cffc7c153ce015bcfdb9670ecbdc6a29d12fadb53f64e0bfac1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716930429420079030,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mj9rx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdacf113-fef4-4a34-af75-2a7908dca02f,},Annotations:map[string]string{io.kubernetes.container.hash: 35118a2f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbc4ce7e67f1b2a0b85ca56319e69227c950c165c7baaa1a4430a7fae20be76,PodSandboxId:77ec75c669211ddf9014581b16023697624006e642344f3d51ff6671e4d5650a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716930429360324308,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 29c00081-275d-4209-bf8a-74849ccf882c,},Annotations:map[string]string{io.kubernetes.container.hash: 20b43327,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3c2b6923bfc3246d147f7c457a56329e4d1a73b2d1b7a9c950934b780d0a055,PodSandboxId:0a32aac2f1b58efbd4104d0e4ab1101ab2af143557828d77d3abb8c6f6dc588b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1716930427927354132,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-24k26,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 59c6483f-f65f-490c-8b1e-7b0b425a80cf,},Annotations:map[string]string{io.kubernetes.container.hash: 9426a2bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6025504364d6efd0fffcdbebe00d742ff98eed2d58ab5e78e9ee8563f959b7ac,PodSandboxId:d775edc01835ffc1f7fbf18983e1140cea4768191758fa2ec6ab0906825250d0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716930425428133632,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sj7k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9619acba-a019-4080-8c86-f63e7ce399bb,},Annotations:map[string]string{io.kubernetes.container.hash: 8c26f61,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa37e66c157492eb9bc9a49f3f4e20ee4a7ce5bcf6f35057d414967b79bb458,PodSandboxId:7ffa32fc2e8314746f95abd726fe03996c5bd26dbef18b73fdd3d67583621694,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716930403955640868,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e
6ff0e976b71b85998ebb889a77071f,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4952f4946567c8b4ade9afce440b96de146c574862dacb82f4f7c0e06e1b25d0,PodSandboxId:bcc4adde8f6d091ae8330a77072b3c75c0932b884e7dde9737e40d71e6cf20c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716930403990649236,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41ae7fb09cc32893a
de671b91f69afc3,},Annotations:map[string]string{io.kubernetes.container.hash: be999250,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64b17a6d3213b35e9534a4c21ffacfb062961eb24e362dd0de814efb7f3a03d5,PodSandboxId:7b5f0784667d4632f452c9f886fe00afdcda711d49621d66dc4ffa5bcbe0992b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716930403917319856,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8c10b71b468300869c8dff507045cfc,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 5b0a154e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2197d4ac3e76cff0d93c083bdceedb63867452c1b621ea1537065ca628fedb9,PodSandboxId:814de41b91317a66ef1b490e7dbef6b3c9f38e667341f0fbac189a3fda9a4b4f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716930403846678107,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92a4fb30036b6ac2c880fe15ce44d259,},Annotations:map
[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4cb66fd6-1de1-4d99-869f-4b64e64ee459 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:14:18 multinode-869191 crio[2885]: time="2024-05-28 21:14:18.525493811Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2e58f670-0f20-47be-8d14-80532cc5ed98 name=/runtime.v1.RuntimeService/Version
	May 28 21:14:18 multinode-869191 crio[2885]: time="2024-05-28 21:14:18.525771719Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2e58f670-0f20-47be-8d14-80532cc5ed98 name=/runtime.v1.RuntimeService/Version
	May 28 21:14:18 multinode-869191 crio[2885]: time="2024-05-28 21:14:18.526749015Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2749681c-5576-4911-9cfd-194ad7995445 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:14:18 multinode-869191 crio[2885]: time="2024-05-28 21:14:18.527498838Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716930858527473866,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143025,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2749681c-5576-4911-9cfd-194ad7995445 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:14:18 multinode-869191 crio[2885]: time="2024-05-28 21:14:18.528108404Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=81712285-a477-4fa8-86b5-fe9936520eb6 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:14:18 multinode-869191 crio[2885]: time="2024-05-28 21:14:18.528179311Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=81712285-a477-4fa8-86b5-fe9936520eb6 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:14:18 multinode-869191 crio[2885]: time="2024-05-28 21:14:18.528554195Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:468d459e1e6230b2bc259c2cfe2705e236bd61bb34da278adef4636f8343fff8,PodSandboxId:d2f96bf8a39d494580400adabf53857c4e386c8dcb1362b00ab04d496415c96c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716930812503603829,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qqxb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8887a9a-26fd-42dd-b3c5-9ff88f628dae,},Annotations:map[string]string{io.kubernetes.container.hash: 171d3763,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:252b9a44a28e675b9f3ec426ddf2b2aba3af066f479de4092e3984bab9b4bdff,PodSandboxId:e476c463b3a6a2fdb96a62e5417d069fb550ac656070ad7b11b607ef9ca879a9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1716930779044101717,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-24k26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59c6483f-f65f-490c-8b1e-7b0b425a80cf,},Annotations:map[string]string{io.kubernetes.container.hash: 9426a2bc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4136b5bbb1fb11d4eb9b12e9bfb612b35c44bb147609cf3895c0d06f2e74fc6f,PodSandboxId:9f7fd13849b4d95056104af0680035bfb2c6849cd3148d0ebe3dd1506798fbaf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716930778892472387,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mj9rx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdacf113-fef4-4a34-af75-2a7908dca02f,},Annotations:map[string]string{io.kubernetes.container.hash: 35118a2f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc094a6daa47a8046b8497314256954e60b46b95bd2991e15338af3c3e6a9ae7,PodSandboxId:b14b96dafa937faa66fe5d1b341110baca885c8e88b87c34755c133a729a7db6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716930778838695092,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sj7k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9619acba-a019-4080-8c86-f63e7ce399bb,},Annotations:map[string]
string{io.kubernetes.container.hash: 8c26f61,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72434233ddf0cf45536340a7aa617a6d64512d73e0811d01074b2a626d43f79c,PodSandboxId:468780c9e91e5b9a0a73de12ffeb3cfa868878a69a66c11fa7d029d39f2c2776,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716930778757028056,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29c00081-275d-4209-bf8a-74849ccf882c,},Annotations:map[string]string{io.kub
ernetes.container.hash: 20b43327,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b852ca44def88ca8173aed69919592580b7bd4d84c154208ea11acd9e2737eb9,PodSandboxId:59f350a36f234501e2aa4d79d488bf846f36b0ea20e18b685396a08a6b7fe36d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716930774961307938,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e6ff0e976b71b85998ebb889a77071f,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3acf49a269336371c288f456c763a8f428cec04ed7f1062c7205ec7389032a2a,PodSandboxId:cfece9dd614bcdc4525a9fde0db763cf742c17664fadf84b084efc3fb49bde24,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716930774986948869,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8c10b71b468300869c8dff507045cfc,},Annotations:map[string]string{io.kubernetes.container.hash: 5b0a154e,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:067e3bfe9287fedd95a060e3d4b3b951427c0cb001493040d415db25f3f47d65,PodSandboxId:771af933b235ff9d38a752a3b25823afdfb643624815725a013a1b25c70e35f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716930775021769207,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92a4fb30036b6ac2c880fe15ce44d259,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fff07e29b61aaaf721af4f63571b7d45d059365093210ebd0a2ce382f0244b5b,PodSandboxId:8441eeb6f8cc28fd102d0cc70272043bd2d8c7fbfd607fb802e5eef8e8f25bf5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716930774935473608,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41ae7fb09cc32893ade671b91f69afc3,},Annotations:map[string]string{io.kubernetes.container.hash: be999250,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3ad8633ed09b8a5144fc6c780d4e37526722b597b04e9d62eddf8487685aced,PodSandboxId:2e646fda19ef38cd2073732544812dbcfe794b780fd168527446e897d29e03f0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716930471845338505,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qqxb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8887a9a-26fd-42dd-b3c5-9ff88f628dae,},Annotations:map[string]string{io.kubernetes.container.hash: 171d3763,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfc4c2fb4e8cc12564c84e740611c5476bfa52dc073b4104d30a3a2d46e3c337,PodSandboxId:352504b5fcfc2cffc7c153ce015bcfdb9670ecbdc6a29d12fadb53f64e0bfac1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716930429420079030,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mj9rx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdacf113-fef4-4a34-af75-2a7908dca02f,},Annotations:map[string]string{io.kubernetes.container.hash: 35118a2f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbc4ce7e67f1b2a0b85ca56319e69227c950c165c7baaa1a4430a7fae20be76,PodSandboxId:77ec75c669211ddf9014581b16023697624006e642344f3d51ff6671e4d5650a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716930429360324308,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 29c00081-275d-4209-bf8a-74849ccf882c,},Annotations:map[string]string{io.kubernetes.container.hash: 20b43327,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3c2b6923bfc3246d147f7c457a56329e4d1a73b2d1b7a9c950934b780d0a055,PodSandboxId:0a32aac2f1b58efbd4104d0e4ab1101ab2af143557828d77d3abb8c6f6dc588b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1716930427927354132,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-24k26,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 59c6483f-f65f-490c-8b1e-7b0b425a80cf,},Annotations:map[string]string{io.kubernetes.container.hash: 9426a2bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6025504364d6efd0fffcdbebe00d742ff98eed2d58ab5e78e9ee8563f959b7ac,PodSandboxId:d775edc01835ffc1f7fbf18983e1140cea4768191758fa2ec6ab0906825250d0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716930425428133632,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sj7k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9619acba-a019-4080-8c86-f63e7ce399bb,},Annotations:map[string]string{io.kubernetes.container.hash: 8c26f61,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa37e66c157492eb9bc9a49f3f4e20ee4a7ce5bcf6f35057d414967b79bb458,PodSandboxId:7ffa32fc2e8314746f95abd726fe03996c5bd26dbef18b73fdd3d67583621694,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716930403955640868,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e
6ff0e976b71b85998ebb889a77071f,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4952f4946567c8b4ade9afce440b96de146c574862dacb82f4f7c0e06e1b25d0,PodSandboxId:bcc4adde8f6d091ae8330a77072b3c75c0932b884e7dde9737e40d71e6cf20c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716930403990649236,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41ae7fb09cc32893a
de671b91f69afc3,},Annotations:map[string]string{io.kubernetes.container.hash: be999250,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64b17a6d3213b35e9534a4c21ffacfb062961eb24e362dd0de814efb7f3a03d5,PodSandboxId:7b5f0784667d4632f452c9f886fe00afdcda711d49621d66dc4ffa5bcbe0992b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716930403917319856,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8c10b71b468300869c8dff507045cfc,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 5b0a154e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2197d4ac3e76cff0d93c083bdceedb63867452c1b621ea1537065ca628fedb9,PodSandboxId:814de41b91317a66ef1b490e7dbef6b3c9f38e667341f0fbac189a3fda9a4b4f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716930403846678107,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92a4fb30036b6ac2c880fe15ce44d259,},Annotations:map
[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=81712285-a477-4fa8-86b5-fe9936520eb6 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:14:18 multinode-869191 crio[2885]: time="2024-05-28 21:14:18.571103198Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1508a5fb-f6ed-4ebe-a428-ff9f7aed63a8 name=/runtime.v1.RuntimeService/Version
	May 28 21:14:18 multinode-869191 crio[2885]: time="2024-05-28 21:14:18.571243225Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1508a5fb-f6ed-4ebe-a428-ff9f7aed63a8 name=/runtime.v1.RuntimeService/Version
	May 28 21:14:18 multinode-869191 crio[2885]: time="2024-05-28 21:14:18.572554265Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b361646f-012c-45dc-aebd-ecd0ddcc2ec5 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:14:18 multinode-869191 crio[2885]: time="2024-05-28 21:14:18.572996393Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716930858572974108,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143025,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b361646f-012c-45dc-aebd-ecd0ddcc2ec5 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:14:18 multinode-869191 crio[2885]: time="2024-05-28 21:14:18.573553112Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=831a5d28-58f7-4307-816a-e293b76781d3 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:14:18 multinode-869191 crio[2885]: time="2024-05-28 21:14:18.573632357Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=831a5d28-58f7-4307-816a-e293b76781d3 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:14:18 multinode-869191 crio[2885]: time="2024-05-28 21:14:18.573963619Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:468d459e1e6230b2bc259c2cfe2705e236bd61bb34da278adef4636f8343fff8,PodSandboxId:d2f96bf8a39d494580400adabf53857c4e386c8dcb1362b00ab04d496415c96c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716930812503603829,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qqxb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8887a9a-26fd-42dd-b3c5-9ff88f628dae,},Annotations:map[string]string{io.kubernetes.container.hash: 171d3763,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:252b9a44a28e675b9f3ec426ddf2b2aba3af066f479de4092e3984bab9b4bdff,PodSandboxId:e476c463b3a6a2fdb96a62e5417d069fb550ac656070ad7b11b607ef9ca879a9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1716930779044101717,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-24k26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59c6483f-f65f-490c-8b1e-7b0b425a80cf,},Annotations:map[string]string{io.kubernetes.container.hash: 9426a2bc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4136b5bbb1fb11d4eb9b12e9bfb612b35c44bb147609cf3895c0d06f2e74fc6f,PodSandboxId:9f7fd13849b4d95056104af0680035bfb2c6849cd3148d0ebe3dd1506798fbaf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716930778892472387,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mj9rx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdacf113-fef4-4a34-af75-2a7908dca02f,},Annotations:map[string]string{io.kubernetes.container.hash: 35118a2f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc094a6daa47a8046b8497314256954e60b46b95bd2991e15338af3c3e6a9ae7,PodSandboxId:b14b96dafa937faa66fe5d1b341110baca885c8e88b87c34755c133a729a7db6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716930778838695092,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sj7k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9619acba-a019-4080-8c86-f63e7ce399bb,},Annotations:map[string]
string{io.kubernetes.container.hash: 8c26f61,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72434233ddf0cf45536340a7aa617a6d64512d73e0811d01074b2a626d43f79c,PodSandboxId:468780c9e91e5b9a0a73de12ffeb3cfa868878a69a66c11fa7d029d39f2c2776,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716930778757028056,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29c00081-275d-4209-bf8a-74849ccf882c,},Annotations:map[string]string{io.kub
ernetes.container.hash: 20b43327,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b852ca44def88ca8173aed69919592580b7bd4d84c154208ea11acd9e2737eb9,PodSandboxId:59f350a36f234501e2aa4d79d488bf846f36b0ea20e18b685396a08a6b7fe36d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716930774961307938,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e6ff0e976b71b85998ebb889a77071f,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3acf49a269336371c288f456c763a8f428cec04ed7f1062c7205ec7389032a2a,PodSandboxId:cfece9dd614bcdc4525a9fde0db763cf742c17664fadf84b084efc3fb49bde24,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716930774986948869,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8c10b71b468300869c8dff507045cfc,},Annotations:map[string]string{io.kubernetes.container.hash: 5b0a154e,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:067e3bfe9287fedd95a060e3d4b3b951427c0cb001493040d415db25f3f47d65,PodSandboxId:771af933b235ff9d38a752a3b25823afdfb643624815725a013a1b25c70e35f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716930775021769207,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92a4fb30036b6ac2c880fe15ce44d259,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fff07e29b61aaaf721af4f63571b7d45d059365093210ebd0a2ce382f0244b5b,PodSandboxId:8441eeb6f8cc28fd102d0cc70272043bd2d8c7fbfd607fb802e5eef8e8f25bf5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716930774935473608,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41ae7fb09cc32893ade671b91f69afc3,},Annotations:map[string]string{io.kubernetes.container.hash: be999250,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3ad8633ed09b8a5144fc6c780d4e37526722b597b04e9d62eddf8487685aced,PodSandboxId:2e646fda19ef38cd2073732544812dbcfe794b780fd168527446e897d29e03f0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716930471845338505,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qqxb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8887a9a-26fd-42dd-b3c5-9ff88f628dae,},Annotations:map[string]string{io.kubernetes.container.hash: 171d3763,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfc4c2fb4e8cc12564c84e740611c5476bfa52dc073b4104d30a3a2d46e3c337,PodSandboxId:352504b5fcfc2cffc7c153ce015bcfdb9670ecbdc6a29d12fadb53f64e0bfac1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716930429420079030,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mj9rx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdacf113-fef4-4a34-af75-2a7908dca02f,},Annotations:map[string]string{io.kubernetes.container.hash: 35118a2f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbc4ce7e67f1b2a0b85ca56319e69227c950c165c7baaa1a4430a7fae20be76,PodSandboxId:77ec75c669211ddf9014581b16023697624006e642344f3d51ff6671e4d5650a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716930429360324308,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 29c00081-275d-4209-bf8a-74849ccf882c,},Annotations:map[string]string{io.kubernetes.container.hash: 20b43327,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3c2b6923bfc3246d147f7c457a56329e4d1a73b2d1b7a9c950934b780d0a055,PodSandboxId:0a32aac2f1b58efbd4104d0e4ab1101ab2af143557828d77d3abb8c6f6dc588b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1716930427927354132,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-24k26,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 59c6483f-f65f-490c-8b1e-7b0b425a80cf,},Annotations:map[string]string{io.kubernetes.container.hash: 9426a2bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6025504364d6efd0fffcdbebe00d742ff98eed2d58ab5e78e9ee8563f959b7ac,PodSandboxId:d775edc01835ffc1f7fbf18983e1140cea4768191758fa2ec6ab0906825250d0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716930425428133632,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sj7k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9619acba-a019-4080-8c86-f63e7ce399bb,},Annotations:map[string]string{io.kubernetes.container.hash: 8c26f61,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa37e66c157492eb9bc9a49f3f4e20ee4a7ce5bcf6f35057d414967b79bb458,PodSandboxId:7ffa32fc2e8314746f95abd726fe03996c5bd26dbef18b73fdd3d67583621694,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716930403955640868,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e
6ff0e976b71b85998ebb889a77071f,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4952f4946567c8b4ade9afce440b96de146c574862dacb82f4f7c0e06e1b25d0,PodSandboxId:bcc4adde8f6d091ae8330a77072b3c75c0932b884e7dde9737e40d71e6cf20c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716930403990649236,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41ae7fb09cc32893a
de671b91f69afc3,},Annotations:map[string]string{io.kubernetes.container.hash: be999250,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64b17a6d3213b35e9534a4c21ffacfb062961eb24e362dd0de814efb7f3a03d5,PodSandboxId:7b5f0784667d4632f452c9f886fe00afdcda711d49621d66dc4ffa5bcbe0992b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716930403917319856,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8c10b71b468300869c8dff507045cfc,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 5b0a154e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2197d4ac3e76cff0d93c083bdceedb63867452c1b621ea1537065ca628fedb9,PodSandboxId:814de41b91317a66ef1b490e7dbef6b3c9f38e667341f0fbac189a3fda9a4b4f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716930403846678107,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92a4fb30036b6ac2c880fe15ce44d259,},Annotations:map
[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=831a5d28-58f7-4307-816a-e293b76781d3 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:14:18 multinode-869191 crio[2885]: time="2024-05-28 21:14:18.627164719Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dcc523e8-9fbf-4c80-a549-fa7688462672 name=/runtime.v1.RuntimeService/Version
	May 28 21:14:18 multinode-869191 crio[2885]: time="2024-05-28 21:14:18.627315332Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dcc523e8-9fbf-4c80-a549-fa7688462672 name=/runtime.v1.RuntimeService/Version
	May 28 21:14:18 multinode-869191 crio[2885]: time="2024-05-28 21:14:18.628537514Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a160008d-5407-4284-95d1-421e8fdee5f0 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:14:18 multinode-869191 crio[2885]: time="2024-05-28 21:14:18.629041281Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716930858629014265,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143025,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a160008d-5407-4284-95d1-421e8fdee5f0 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:14:18 multinode-869191 crio[2885]: time="2024-05-28 21:14:18.629530564Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d2f5671b-cf90-4411-8bf1-ff9c06d84a71 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:14:18 multinode-869191 crio[2885]: time="2024-05-28 21:14:18.629611564Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d2f5671b-cf90-4411-8bf1-ff9c06d84a71 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:14:18 multinode-869191 crio[2885]: time="2024-05-28 21:14:18.630140236Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:468d459e1e6230b2bc259c2cfe2705e236bd61bb34da278adef4636f8343fff8,PodSandboxId:d2f96bf8a39d494580400adabf53857c4e386c8dcb1362b00ab04d496415c96c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716930812503603829,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qqxb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8887a9a-26fd-42dd-b3c5-9ff88f628dae,},Annotations:map[string]string{io.kubernetes.container.hash: 171d3763,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:252b9a44a28e675b9f3ec426ddf2b2aba3af066f479de4092e3984bab9b4bdff,PodSandboxId:e476c463b3a6a2fdb96a62e5417d069fb550ac656070ad7b11b607ef9ca879a9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1716930779044101717,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-24k26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59c6483f-f65f-490c-8b1e-7b0b425a80cf,},Annotations:map[string]string{io.kubernetes.container.hash: 9426a2bc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4136b5bbb1fb11d4eb9b12e9bfb612b35c44bb147609cf3895c0d06f2e74fc6f,PodSandboxId:9f7fd13849b4d95056104af0680035bfb2c6849cd3148d0ebe3dd1506798fbaf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716930778892472387,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mj9rx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdacf113-fef4-4a34-af75-2a7908dca02f,},Annotations:map[string]string{io.kubernetes.container.hash: 35118a2f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc094a6daa47a8046b8497314256954e60b46b95bd2991e15338af3c3e6a9ae7,PodSandboxId:b14b96dafa937faa66fe5d1b341110baca885c8e88b87c34755c133a729a7db6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716930778838695092,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sj7k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9619acba-a019-4080-8c86-f63e7ce399bb,},Annotations:map[string]
string{io.kubernetes.container.hash: 8c26f61,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72434233ddf0cf45536340a7aa617a6d64512d73e0811d01074b2a626d43f79c,PodSandboxId:468780c9e91e5b9a0a73de12ffeb3cfa868878a69a66c11fa7d029d39f2c2776,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716930778757028056,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29c00081-275d-4209-bf8a-74849ccf882c,},Annotations:map[string]string{io.kub
ernetes.container.hash: 20b43327,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b852ca44def88ca8173aed69919592580b7bd4d84c154208ea11acd9e2737eb9,PodSandboxId:59f350a36f234501e2aa4d79d488bf846f36b0ea20e18b685396a08a6b7fe36d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716930774961307938,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e6ff0e976b71b85998ebb889a77071f,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3acf49a269336371c288f456c763a8f428cec04ed7f1062c7205ec7389032a2a,PodSandboxId:cfece9dd614bcdc4525a9fde0db763cf742c17664fadf84b084efc3fb49bde24,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716930774986948869,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8c10b71b468300869c8dff507045cfc,},Annotations:map[string]string{io.kubernetes.container.hash: 5b0a154e,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:067e3bfe9287fedd95a060e3d4b3b951427c0cb001493040d415db25f3f47d65,PodSandboxId:771af933b235ff9d38a752a3b25823afdfb643624815725a013a1b25c70e35f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716930775021769207,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92a4fb30036b6ac2c880fe15ce44d259,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fff07e29b61aaaf721af4f63571b7d45d059365093210ebd0a2ce382f0244b5b,PodSandboxId:8441eeb6f8cc28fd102d0cc70272043bd2d8c7fbfd607fb802e5eef8e8f25bf5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716930774935473608,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41ae7fb09cc32893ade671b91f69afc3,},Annotations:map[string]string{io.kubernetes.container.hash: be999250,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3ad8633ed09b8a5144fc6c780d4e37526722b597b04e9d62eddf8487685aced,PodSandboxId:2e646fda19ef38cd2073732544812dbcfe794b780fd168527446e897d29e03f0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716930471845338505,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qqxb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8887a9a-26fd-42dd-b3c5-9ff88f628dae,},Annotations:map[string]string{io.kubernetes.container.hash: 171d3763,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfc4c2fb4e8cc12564c84e740611c5476bfa52dc073b4104d30a3a2d46e3c337,PodSandboxId:352504b5fcfc2cffc7c153ce015bcfdb9670ecbdc6a29d12fadb53f64e0bfac1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716930429420079030,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mj9rx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdacf113-fef4-4a34-af75-2a7908dca02f,},Annotations:map[string]string{io.kubernetes.container.hash: 35118a2f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbc4ce7e67f1b2a0b85ca56319e69227c950c165c7baaa1a4430a7fae20be76,PodSandboxId:77ec75c669211ddf9014581b16023697624006e642344f3d51ff6671e4d5650a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716930429360324308,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 29c00081-275d-4209-bf8a-74849ccf882c,},Annotations:map[string]string{io.kubernetes.container.hash: 20b43327,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3c2b6923bfc3246d147f7c457a56329e4d1a73b2d1b7a9c950934b780d0a055,PodSandboxId:0a32aac2f1b58efbd4104d0e4ab1101ab2af143557828d77d3abb8c6f6dc588b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1716930427927354132,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-24k26,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 59c6483f-f65f-490c-8b1e-7b0b425a80cf,},Annotations:map[string]string{io.kubernetes.container.hash: 9426a2bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6025504364d6efd0fffcdbebe00d742ff98eed2d58ab5e78e9ee8563f959b7ac,PodSandboxId:d775edc01835ffc1f7fbf18983e1140cea4768191758fa2ec6ab0906825250d0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716930425428133632,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sj7k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9619acba-a019-4080-8c86-f63e7ce399bb,},Annotations:map[string]string{io.kubernetes.container.hash: 8c26f61,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa37e66c157492eb9bc9a49f3f4e20ee4a7ce5bcf6f35057d414967b79bb458,PodSandboxId:7ffa32fc2e8314746f95abd726fe03996c5bd26dbef18b73fdd3d67583621694,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716930403955640868,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e
6ff0e976b71b85998ebb889a77071f,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4952f4946567c8b4ade9afce440b96de146c574862dacb82f4f7c0e06e1b25d0,PodSandboxId:bcc4adde8f6d091ae8330a77072b3c75c0932b884e7dde9737e40d71e6cf20c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716930403990649236,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41ae7fb09cc32893a
de671b91f69afc3,},Annotations:map[string]string{io.kubernetes.container.hash: be999250,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64b17a6d3213b35e9534a4c21ffacfb062961eb24e362dd0de814efb7f3a03d5,PodSandboxId:7b5f0784667d4632f452c9f886fe00afdcda711d49621d66dc4ffa5bcbe0992b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716930403917319856,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8c10b71b468300869c8dff507045cfc,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 5b0a154e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2197d4ac3e76cff0d93c083bdceedb63867452c1b621ea1537065ca628fedb9,PodSandboxId:814de41b91317a66ef1b490e7dbef6b3c9f38e667341f0fbac189a3fda9a4b4f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716930403846678107,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92a4fb30036b6ac2c880fe15ce44d259,},Annotations:map
[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d2f5671b-cf90-4411-8bf1-ff9c06d84a71 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	468d459e1e623       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      46 seconds ago       Running             busybox                   1                   d2f96bf8a39d4       busybox-fc5497c4f-qqxb7
	252b9a44a28e6       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      About a minute ago   Running             kindnet-cni               1                   e476c463b3a6a       kindnet-24k26
	4136b5bbb1fb1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   9f7fd13849b4d       coredns-7db6d8ff4d-mj9rx
	dc094a6daa47a       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      About a minute ago   Running             kube-proxy                1                   b14b96dafa937       kube-proxy-sj7k8
	72434233ddf0c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   468780c9e91e5       storage-provisioner
	067e3bfe9287f       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      About a minute ago   Running             kube-controller-manager   1                   771af933b235f       kube-controller-manager-multinode-869191
	3acf49a269336       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   cfece9dd614bc       etcd-multinode-869191
	b852ca44def88       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      About a minute ago   Running             kube-scheduler            1                   59f350a36f234       kube-scheduler-multinode-869191
	fff07e29b61aa       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      About a minute ago   Running             kube-apiserver            1                   8441eeb6f8cc2       kube-apiserver-multinode-869191
	b3ad8633ed09b       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   6 minutes ago        Exited              busybox                   0                   2e646fda19ef3       busybox-fc5497c4f-qqxb7
	bfc4c2fb4e8cc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   352504b5fcfc2       coredns-7db6d8ff4d-mj9rx
	3fbc4ce7e67f1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   77ec75c669211       storage-provisioner
	c3c2b6923bfc3       docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266    7 minutes ago        Exited              kindnet-cni               0                   0a32aac2f1b58       kindnet-24k26
	6025504364d6e       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      7 minutes ago        Exited              kube-proxy                0                   d775edc01835f       kube-proxy-sj7k8
	4952f4946567c       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      7 minutes ago        Exited              kube-apiserver            0                   bcc4adde8f6d0       kube-apiserver-multinode-869191
	1aa37e66c1574       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      7 minutes ago        Exited              kube-scheduler            0                   7ffa32fc2e831       kube-scheduler-multinode-869191
	64b17a6d3213b       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago        Exited              etcd                      0                   7b5f0784667d4       etcd-multinode-869191
	e2197d4ac3e76       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      7 minutes ago        Exited              kube-controller-manager   0                   814de41b91317       kube-controller-manager-multinode-869191
	
	
	==> coredns [4136b5bbb1fb11d4eb9b12e9bfb612b35c44bb147609cf3895c0d06f2e74fc6f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:35187 - 56157 "HINFO IN 1220250268852440767.8569700546674494853. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008912279s
	
	
	==> coredns [bfc4c2fb4e8cc12564c84e740611c5476bfa52dc073b4104d30a3a2d46e3c337] <==
	[INFO] 10.244.0.3:53899 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00156546s
	[INFO] 10.244.0.3:40765 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000048349s
	[INFO] 10.244.0.3:34881 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000037411s
	[INFO] 10.244.0.3:48787 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001163899s
	[INFO] 10.244.0.3:57425 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000094486s
	[INFO] 10.244.0.3:36844 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000031395s
	[INFO] 10.244.0.3:39117 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000028768s
	[INFO] 10.244.1.2:49235 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011624s
	[INFO] 10.244.1.2:59719 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105642s
	[INFO] 10.244.1.2:58585 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064667s
	[INFO] 10.244.1.2:33081 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000054628s
	[INFO] 10.244.0.3:59307 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112683s
	[INFO] 10.244.0.3:51157 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000079759s
	[INFO] 10.244.0.3:32830 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000062854s
	[INFO] 10.244.0.3:59588 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067153s
	[INFO] 10.244.1.2:53725 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000231853s
	[INFO] 10.244.1.2:56138 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000153165s
	[INFO] 10.244.1.2:53150 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000144621s
	[INFO] 10.244.1.2:58929 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000139937s
	[INFO] 10.244.0.3:49565 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000073314s
	[INFO] 10.244.0.3:43790 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000039622s
	[INFO] 10.244.0.3:58158 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000043412s
	[INFO] 10.244.0.3:58376 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000033006s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-869191
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-869191
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82
	                    minikube.k8s.io/name=multinode-869191
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_28T21_06_50_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 21:06:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-869191
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 21:14:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 May 2024 21:12:58 +0000   Tue, 28 May 2024 21:06:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 May 2024 21:12:58 +0000   Tue, 28 May 2024 21:06:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 May 2024 21:12:58 +0000   Tue, 28 May 2024 21:06:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 May 2024 21:12:58 +0000   Tue, 28 May 2024 21:07:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.65
	  Hostname:    multinode-869191
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9f68408ecc464d1d950bbc1d9e9539d7
	  System UUID:                9f68408e-cc46-4d1d-950b-bc1d9e9539d7
	  Boot ID:                    10994f05-03c3-4424-8036-ffdd7c4224ad
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-qqxb7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 coredns-7db6d8ff4d-mj9rx                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m15s
	  kube-system                 etcd-multinode-869191                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m29s
	  kube-system                 kindnet-24k26                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m15s
	  kube-system                 kube-apiserver-multinode-869191             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m29s
	  kube-system                 kube-controller-manager-multinode-869191    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m29s
	  kube-system                 kube-proxy-sj7k8                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m15s
	  kube-system                 kube-scheduler-multinode-869191             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m29s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m13s                  kube-proxy       
	  Normal  Starting                 79s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  7m35s (x8 over 7m35s)  kubelet          Node multinode-869191 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m35s (x8 over 7m35s)  kubelet          Node multinode-869191 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m35s (x7 over 7m35s)  kubelet          Node multinode-869191 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m29s                  kubelet          Node multinode-869191 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  7m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    7m29s                  kubelet          Node multinode-869191 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m29s                  kubelet          Node multinode-869191 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m29s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m16s                  node-controller  Node multinode-869191 event: Registered Node multinode-869191 in Controller
	  Normal  NodeReady                7m10s                  kubelet          Node multinode-869191 status is now: NodeReady
	  Normal  Starting                 84s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  84s (x8 over 84s)      kubelet          Node multinode-869191 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    84s (x8 over 84s)      kubelet          Node multinode-869191 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     84s (x7 over 84s)      kubelet          Node multinode-869191 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  84s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           68s                    node-controller  Node multinode-869191 event: Registered Node multinode-869191 in Controller
	
	
	Name:               multinode-869191-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-869191-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82
	                    minikube.k8s.io/name=multinode-869191
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_28T21_13_39_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 21:13:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-869191-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 21:14:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 May 2024 21:14:09 +0000   Tue, 28 May 2024 21:13:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 May 2024 21:14:09 +0000   Tue, 28 May 2024 21:13:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 May 2024 21:14:09 +0000   Tue, 28 May 2024 21:13:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 May 2024 21:14:09 +0000   Tue, 28 May 2024 21:13:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.98
	  Hostname:    multinode-869191-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3fb5ce73eb7043c7bf039afda03c6296
	  System UUID:                3fb5ce73-eb70-43c7-bf03-9afda03c6296
	  Boot ID:                    3f29dd25-66c9-4380-afd3-8e6f3230aa31
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-hz7j8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kindnet-72k82              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m42s
	  kube-system                 kube-proxy-k7csx           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m37s                  kube-proxy  
	  Normal  Starting                 35s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  6m43s (x2 over 6m43s)  kubelet     Node multinode-869191-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m43s (x2 over 6m43s)  kubelet     Node multinode-869191-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m43s (x2 over 6m43s)  kubelet     Node multinode-869191-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m43s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m33s                  kubelet     Node multinode-869191-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  41s (x2 over 41s)      kubelet     Node multinode-869191-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s (x2 over 41s)      kubelet     Node multinode-869191-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s (x2 over 41s)      kubelet     Node multinode-869191-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  41s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                31s                    kubelet     Node multinode-869191-m02 status is now: NodeReady
	
	
	Name:               multinode-869191-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-869191-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82
	                    minikube.k8s.io/name=multinode-869191
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_28T21_14_08_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 21:14:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-869191-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 21:14:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 May 2024 21:14:15 +0000   Tue, 28 May 2024 21:14:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 May 2024 21:14:15 +0000   Tue, 28 May 2024 21:14:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 May 2024 21:14:15 +0000   Tue, 28 May 2024 21:14:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 May 2024 21:14:15 +0000   Tue, 28 May 2024 21:14:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.154
	  Hostname:    multinode-869191-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d8650d36c9cc4da1959ac90ace01a077
	  System UUID:                d8650d36-c9cc-4da1-959a-c90ace01a077
	  Boot ID:                    c311b963-c27d-40f4-9c93-c5cad4a86cc6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-vw26c       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m55s
	  kube-system                 kube-proxy-z5bd7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 5m49s                  kube-proxy  
	  Normal  Starting                 8s                     kube-proxy  
	  Normal  Starting                 5m10s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  5m55s (x2 over 5m55s)  kubelet     Node multinode-869191-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m55s (x2 over 5m55s)  kubelet     Node multinode-869191-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m55s (x2 over 5m55s)  kubelet     Node multinode-869191-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m55s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m45s                  kubelet     Node multinode-869191-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m16s (x2 over 5m16s)  kubelet     Node multinode-869191-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m16s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m16s (x2 over 5m16s)  kubelet     Node multinode-869191-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m16s (x2 over 5m16s)  kubelet     Node multinode-869191-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m7s                   kubelet     Node multinode-869191-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  12s (x2 over 12s)      kubelet     Node multinode-869191-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12s (x2 over 12s)      kubelet     Node multinode-869191-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12s (x2 over 12s)      kubelet     Node multinode-869191-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                4s                     kubelet     Node multinode-869191-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.059236] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061545] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.177352] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.112604] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.256655] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +4.077738] systemd-fstab-generator[759]: Ignoring "noauto" option for root device
	[  +4.677772] systemd-fstab-generator[944]: Ignoring "noauto" option for root device
	[  +0.062683] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.479704] systemd-fstab-generator[1274]: Ignoring "noauto" option for root device
	[  +0.069056] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.219958] kauditd_printk_skb: 18 callbacks suppressed
	[May28 21:07] systemd-fstab-generator[1469]: Ignoring "noauto" option for root device
	[  +5.423914] kauditd_printk_skb: 56 callbacks suppressed
	[ +40.408980] kauditd_printk_skb: 16 callbacks suppressed
	[May28 21:12] systemd-fstab-generator[2800]: Ignoring "noauto" option for root device
	[  +0.143911] systemd-fstab-generator[2813]: Ignoring "noauto" option for root device
	[  +0.177152] systemd-fstab-generator[2827]: Ignoring "noauto" option for root device
	[  +0.143813] systemd-fstab-generator[2840]: Ignoring "noauto" option for root device
	[  +0.279522] systemd-fstab-generator[2868]: Ignoring "noauto" option for root device
	[  +1.335847] systemd-fstab-generator[2970]: Ignoring "noauto" option for root device
	[  +2.123244] systemd-fstab-generator[3095]: Ignoring "noauto" option for root device
	[  +1.006466] kauditd_printk_skb: 164 callbacks suppressed
	[May28 21:13] kauditd_printk_skb: 52 callbacks suppressed
	[  +3.037636] systemd-fstab-generator[3915]: Ignoring "noauto" option for root device
	[ +18.402282] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [3acf49a269336371c288f456c763a8f428cec04ed7f1062c7205ec7389032a2a] <==
	{"level":"info","ts":"2024-05-28T21:12:55.633296Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-28T21:12:55.633544Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-28T21:12:55.634288Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c17fb7325889e027 switched to configuration voters=(13943064398224023591)"}
	{"level":"info","ts":"2024-05-28T21:12:55.638524Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f0d16ed1ce05ac0e","local-member-id":"c17fb7325889e027","added-peer-id":"c17fb7325889e027","added-peer-peer-urls":["https://192.168.39.65:2380"]}
	{"level":"info","ts":"2024-05-28T21:12:55.638954Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f0d16ed1ce05ac0e","local-member-id":"c17fb7325889e027","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-28T21:12:55.641309Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-28T21:12:55.647965Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-28T21:12:55.648263Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"c17fb7325889e027","initial-advertise-peer-urls":["https://192.168.39.65:2380"],"listen-peer-urls":["https://192.168.39.65:2380"],"advertise-client-urls":["https://192.168.39.65:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.65:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-28T21:12:55.648312Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-28T21:12:55.649471Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.65:2380"}
	{"level":"info","ts":"2024-05-28T21:12:55.649507Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.65:2380"}
	{"level":"info","ts":"2024-05-28T21:12:56.843955Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c17fb7325889e027 is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-28T21:12:56.844063Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c17fb7325889e027 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-28T21:12:56.844141Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c17fb7325889e027 received MsgPreVoteResp from c17fb7325889e027 at term 2"}
	{"level":"info","ts":"2024-05-28T21:12:56.844176Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c17fb7325889e027 became candidate at term 3"}
	{"level":"info","ts":"2024-05-28T21:12:56.844272Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c17fb7325889e027 received MsgVoteResp from c17fb7325889e027 at term 3"}
	{"level":"info","ts":"2024-05-28T21:12:56.844302Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c17fb7325889e027 became leader at term 3"}
	{"level":"info","ts":"2024-05-28T21:12:56.844333Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c17fb7325889e027 elected leader c17fb7325889e027 at term 3"}
	{"level":"info","ts":"2024-05-28T21:12:56.851884Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"c17fb7325889e027","local-member-attributes":"{Name:multinode-869191 ClientURLs:[https://192.168.39.65:2379]}","request-path":"/0/members/c17fb7325889e027/attributes","cluster-id":"f0d16ed1ce05ac0e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-28T21:12:56.852111Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-28T21:12:56.852343Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-28T21:12:56.852386Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-28T21:12:56.852796Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-28T21:12:56.854817Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-28T21:12:56.854989Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.65:2379"}
	
	
	==> etcd [64b17a6d3213b35e9534a4c21ffacfb062961eb24e362dd0de814efb7f3a03d5] <==
	{"level":"warn","ts":"2024-05-28T21:08:24.907181Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"180.187322ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16152036647794359588 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-869191-m03.17d3c33280d8ce0b\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-869191-m03.17d3c33280d8ce0b\" value_size:646 lease:6928664610939583525 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-05-28T21:08:24.907483Z","caller":"traceutil/trace.go:171","msg":"trace[364224060] transaction","detail":"{read_only:false; response_revision:606; number_of_response:1; }","duration":"187.529019ms","start":"2024-05-28T21:08:24.719923Z","end":"2024-05-28T21:08:24.907452Z","steps":["trace[364224060] 'process raft request'  (duration: 187.487069ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-28T21:08:24.907621Z","caller":"traceutil/trace.go:171","msg":"trace[1714664065] transaction","detail":"{read_only:false; response_revision:605; number_of_response:1; }","duration":"259.527417ms","start":"2024-05-28T21:08:24.648087Z","end":"2024-05-28T21:08:24.907614Z","steps":["trace[1714664065] 'process raft request'  (duration: 78.805916ms)","trace[1714664065] 'compare'  (duration: 180.035866ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-28T21:08:24.907783Z","caller":"traceutil/trace.go:171","msg":"trace[1256363740] linearizableReadLoop","detail":"{readStateIndex:637; appliedIndex:636; }","duration":"256.860934ms","start":"2024-05-28T21:08:24.650909Z","end":"2024-05-28T21:08:24.90777Z","steps":["trace[1256363740] 'read index received'  (duration: 75.991619ms)","trace[1256363740] 'applied index is now lower than readState.Index'  (duration: 180.868335ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-28T21:08:24.907953Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"257.035675ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-869191-m03\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-05-28T21:08:24.909287Z","caller":"traceutil/trace.go:171","msg":"trace[273515262] range","detail":"{range_begin:/registry/minions/multinode-869191-m03; range_end:; response_count:1; response_revision:606; }","duration":"258.38173ms","start":"2024-05-28T21:08:24.65089Z","end":"2024-05-28T21:08:24.909272Z","steps":["trace[273515262] 'agreement among raft nodes before linearized reading'  (duration: 256.977637ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T21:08:24.909453Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.754218ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-869191-m03\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-05-28T21:08:24.909498Z","caller":"traceutil/trace.go:171","msg":"trace[1466748700] range","detail":"{range_begin:/registry/minions/multinode-869191-m03; range_end:; response_count:1; response_revision:606; }","duration":"117.823569ms","start":"2024-05-28T21:08:24.791667Z","end":"2024-05-28T21:08:24.90949Z","steps":["trace[1466748700] 'agreement among raft nodes before linearized reading'  (duration: 117.755921ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-28T21:08:29.75937Z","caller":"traceutil/trace.go:171","msg":"trace[1910362574] transaction","detail":"{read_only:false; response_revision:643; number_of_response:1; }","duration":"108.725124ms","start":"2024-05-28T21:08:29.650624Z","end":"2024-05-28T21:08:29.759349Z","steps":["trace[1910362574] 'process raft request'  (duration: 108.518862ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-28T21:08:30.057847Z","caller":"traceutil/trace.go:171","msg":"trace[1654002351] transaction","detail":"{read_only:false; response_revision:644; number_of_response:1; }","duration":"200.337379ms","start":"2024-05-28T21:08:29.857486Z","end":"2024-05-28T21:08:30.057824Z","steps":["trace[1654002351] 'process raft request'  (duration: 200.187192ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-28T21:08:30.058183Z","caller":"traceutil/trace.go:171","msg":"trace[1374969171] linearizableReadLoop","detail":"{readStateIndex:681; appliedIndex:681; }","duration":"107.325907ms","start":"2024-05-28T21:08:29.950841Z","end":"2024-05-28T21:08:30.058167Z","steps":["trace[1374969171] 'read index received'  (duration: 107.318989ms)","trace[1374969171] 'applied index is now lower than readState.Index'  (duration: 5.506µs)"],"step_count":2}
	{"level":"warn","ts":"2024-05-28T21:08:30.058437Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.579197ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2024-05-28T21:08:30.058487Z","caller":"traceutil/trace.go:171","msg":"trace[726718063] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:644; }","duration":"107.659346ms","start":"2024-05-28T21:08:29.950817Z","end":"2024-05-28T21:08:30.058477Z","steps":["trace[726718063] 'agreement among raft nodes before linearized reading'  (duration: 107.506163ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T21:08:30.070306Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.1769ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-869191-m03\" ","response":"range_response_count:1 size:2953"}
	{"level":"info","ts":"2024-05-28T21:08:30.070384Z","caller":"traceutil/trace.go:171","msg":"trace[590012406] range","detail":"{range_begin:/registry/minions/multinode-869191-m03; range_end:; response_count:1; response_revision:645; }","duration":"108.289157ms","start":"2024-05-28T21:08:29.962086Z","end":"2024-05-28T21:08:30.070376Z","steps":["trace[590012406] 'agreement among raft nodes before linearized reading'  (duration: 107.747507ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-28T21:11:18.423239Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-05-28T21:11:18.423387Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-869191","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.65:2380"],"advertise-client-urls":["https://192.168.39.65:2379"]}
	{"level":"warn","ts":"2024-05-28T21:11:18.423502Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-28T21:11:18.423647Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-28T21:11:18.47532Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.65:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-28T21:11:18.475401Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.65:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-28T21:11:18.475527Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"c17fb7325889e027","current-leader-member-id":"c17fb7325889e027"}
	{"level":"info","ts":"2024-05-28T21:11:18.480191Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.65:2380"}
	{"level":"info","ts":"2024-05-28T21:11:18.480372Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.65:2380"}
	{"level":"info","ts":"2024-05-28T21:11:18.480397Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-869191","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.65:2380"],"advertise-client-urls":["https://192.168.39.65:2379"]}
	
	
	==> kernel <==
	 21:14:19 up 8 min,  0 users,  load average: 0.41, 0.32, 0.16
	Linux multinode-869191 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [252b9a44a28e675b9f3ec426ddf2b2aba3af066f479de4092e3984bab9b4bdff] <==
	I0528 21:13:29.805365       1 main.go:250] Node multinode-869191-m03 has CIDR [10.244.3.0/24] 
	I0528 21:13:39.817583       1 main.go:223] Handling node with IPs: map[192.168.39.65:{}]
	I0528 21:13:39.817619       1 main.go:227] handling current node
	I0528 21:13:39.817630       1 main.go:223] Handling node with IPs: map[192.168.39.98:{}]
	I0528 21:13:39.817635       1 main.go:250] Node multinode-869191-m02 has CIDR [10.244.1.0/24] 
	I0528 21:13:39.817747       1 main.go:223] Handling node with IPs: map[192.168.39.154:{}]
	I0528 21:13:39.817755       1 main.go:250] Node multinode-869191-m03 has CIDR [10.244.3.0/24] 
	I0528 21:13:49.828639       1 main.go:223] Handling node with IPs: map[192.168.39.65:{}]
	I0528 21:13:49.828673       1 main.go:227] handling current node
	I0528 21:13:49.828683       1 main.go:223] Handling node with IPs: map[192.168.39.98:{}]
	I0528 21:13:49.828688       1 main.go:250] Node multinode-869191-m02 has CIDR [10.244.1.0/24] 
	I0528 21:13:49.828796       1 main.go:223] Handling node with IPs: map[192.168.39.154:{}]
	I0528 21:13:49.828817       1 main.go:250] Node multinode-869191-m03 has CIDR [10.244.3.0/24] 
	I0528 21:13:59.843904       1 main.go:223] Handling node with IPs: map[192.168.39.65:{}]
	I0528 21:13:59.844190       1 main.go:227] handling current node
	I0528 21:13:59.844325       1 main.go:223] Handling node with IPs: map[192.168.39.98:{}]
	I0528 21:13:59.844347       1 main.go:250] Node multinode-869191-m02 has CIDR [10.244.1.0/24] 
	I0528 21:13:59.844592       1 main.go:223] Handling node with IPs: map[192.168.39.154:{}]
	I0528 21:13:59.844614       1 main.go:250] Node multinode-869191-m03 has CIDR [10.244.3.0/24] 
	I0528 21:14:09.855109       1 main.go:223] Handling node with IPs: map[192.168.39.65:{}]
	I0528 21:14:09.855178       1 main.go:227] handling current node
	I0528 21:14:09.855303       1 main.go:223] Handling node with IPs: map[192.168.39.98:{}]
	I0528 21:14:09.855333       1 main.go:250] Node multinode-869191-m02 has CIDR [10.244.1.0/24] 
	I0528 21:14:09.855544       1 main.go:223] Handling node with IPs: map[192.168.39.154:{}]
	I0528 21:14:09.855582       1 main.go:250] Node multinode-869191-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [c3c2b6923bfc3246d147f7c457a56329e4d1a73b2d1b7a9c950934b780d0a055] <==
	I0528 21:10:28.776193       1 main.go:250] Node multinode-869191-m03 has CIDR [10.244.3.0/24] 
	I0528 21:10:38.780285       1 main.go:223] Handling node with IPs: map[192.168.39.65:{}]
	I0528 21:10:38.780371       1 main.go:227] handling current node
	I0528 21:10:38.780395       1 main.go:223] Handling node with IPs: map[192.168.39.98:{}]
	I0528 21:10:38.780417       1 main.go:250] Node multinode-869191-m02 has CIDR [10.244.1.0/24] 
	I0528 21:10:38.780541       1 main.go:223] Handling node with IPs: map[192.168.39.154:{}]
	I0528 21:10:38.780562       1 main.go:250] Node multinode-869191-m03 has CIDR [10.244.3.0/24] 
	I0528 21:10:48.794630       1 main.go:223] Handling node with IPs: map[192.168.39.65:{}]
	I0528 21:10:48.794715       1 main.go:227] handling current node
	I0528 21:10:48.794740       1 main.go:223] Handling node with IPs: map[192.168.39.98:{}]
	I0528 21:10:48.794756       1 main.go:250] Node multinode-869191-m02 has CIDR [10.244.1.0/24] 
	I0528 21:10:48.794878       1 main.go:223] Handling node with IPs: map[192.168.39.154:{}]
	I0528 21:10:48.794898       1 main.go:250] Node multinode-869191-m03 has CIDR [10.244.3.0/24] 
	I0528 21:10:58.807105       1 main.go:223] Handling node with IPs: map[192.168.39.65:{}]
	I0528 21:10:58.807192       1 main.go:227] handling current node
	I0528 21:10:58.807290       1 main.go:223] Handling node with IPs: map[192.168.39.98:{}]
	I0528 21:10:58.807308       1 main.go:250] Node multinode-869191-m02 has CIDR [10.244.1.0/24] 
	I0528 21:10:58.807425       1 main.go:223] Handling node with IPs: map[192.168.39.154:{}]
	I0528 21:10:58.807444       1 main.go:250] Node multinode-869191-m03 has CIDR [10.244.3.0/24] 
	I0528 21:11:08.817281       1 main.go:223] Handling node with IPs: map[192.168.39.65:{}]
	I0528 21:11:08.817418       1 main.go:227] handling current node
	I0528 21:11:08.817491       1 main.go:223] Handling node with IPs: map[192.168.39.98:{}]
	I0528 21:11:08.817524       1 main.go:250] Node multinode-869191-m02 has CIDR [10.244.1.0/24] 
	I0528 21:11:08.817639       1 main.go:223] Handling node with IPs: map[192.168.39.154:{}]
	I0528 21:11:08.817659       1 main.go:250] Node multinode-869191-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [4952f4946567c8b4ade9afce440b96de146c574862dacb82f4f7c0e06e1b25d0] <==
	I0528 21:11:18.442928       1 controller.go:130] Shutting down legacy_token_tracking_controller
	I0528 21:11:18.438798       1 crd_finalizer.go:278] Shutting down CRDFinalizer
	I0528 21:11:18.439125       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0528 21:11:18.439169       1 controller.go:157] Shutting down quota evaluator
	I0528 21:11:18.443498       1 controller.go:176] quota evaluator worker shutdown
	I0528 21:11:18.439570       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0528 21:11:18.443582       1 controller.go:176] quota evaluator worker shutdown
	I0528 21:11:18.443605       1 controller.go:176] quota evaluator worker shutdown
	I0528 21:11:18.443626       1 controller.go:176] quota evaluator worker shutdown
	I0528 21:11:18.443648       1 controller.go:176] quota evaluator worker shutdown
	I0528 21:11:18.446130       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0528 21:11:18.451052       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	W0528 21:11:18.456462       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:11:18.457089       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:11:18.457186       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:11:18.457373       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:11:18.457430       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:11:18.457480       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:11:18.457532       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:11:18.457590       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:11:18.457642       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:11:18.457693       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:11:18.457744       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:11:18.457801       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:11:18.457861       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [fff07e29b61aaaf721af4f63571b7d45d059365093210ebd0a2ce382f0244b5b] <==
	I0528 21:12:58.175289       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0528 21:12:58.181608       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0528 21:12:58.181639       1 policy_source.go:224] refreshing policies
	I0528 21:12:58.182766       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0528 21:12:58.211042       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0528 21:12:58.211259       1 aggregator.go:165] initial CRD sync complete...
	I0528 21:12:58.211291       1 autoregister_controller.go:141] Starting autoregister controller
	I0528 21:12:58.211315       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0528 21:12:58.211337       1 cache.go:39] Caches are synced for autoregister controller
	I0528 21:12:58.267580       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0528 21:12:58.267655       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0528 21:12:58.267762       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0528 21:12:58.268961       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0528 21:12:58.269172       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0528 21:12:58.271307       1 shared_informer.go:320] Caches are synced for configmaps
	I0528 21:12:58.279171       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0528 21:12:58.293594       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0528 21:12:59.085034       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0528 21:13:00.237130       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0528 21:13:00.364439       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0528 21:13:00.375833       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0528 21:13:00.450903       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0528 21:13:00.457094       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0528 21:13:10.933150       1 controller.go:615] quota admission added evaluator for: endpoints
	I0528 21:13:10.989135       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [067e3bfe9287fedd95a060e3d4b3b951427c0cb001493040d415db25f3f47d65] <==
	I0528 21:13:11.392399       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0528 21:13:11.398699       1 shared_informer.go:320] Caches are synced for garbage collector
	I0528 21:13:34.250561       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.876665ms"
	I0528 21:13:34.257749       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.091253ms"
	I0528 21:13:34.275615       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.818233ms"
	I0528 21:13:34.275704       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.038µs"
	I0528 21:13:38.729899       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-869191-m02\" does not exist"
	I0528 21:13:38.744953       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-869191-m02" podCIDRs=["10.244.1.0/24"]
	I0528 21:13:40.616988       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.799µs"
	I0528 21:13:40.653597       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.271µs"
	I0528 21:13:40.662162       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.326µs"
	I0528 21:13:40.676488       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.071µs"
	I0528 21:13:40.680685       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.715µs"
	I0528 21:13:40.682757       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.61µs"
	I0528 21:13:41.100322       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.908µs"
	I0528 21:13:48.290359       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-869191-m02"
	I0528 21:13:48.312846       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.339µs"
	I0528 21:13:48.325940       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.248µs"
	I0528 21:13:52.135754       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.118332ms"
	I0528 21:13:52.136249       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="133.386µs"
	I0528 21:14:06.367150       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-869191-m02"
	I0528 21:14:07.574497       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-869191-m03\" does not exist"
	I0528 21:14:07.574564       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-869191-m02"
	I0528 21:14:07.585133       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-869191-m03" podCIDRs=["10.244.2.0/24"]
	I0528 21:14:15.683489       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-869191-m02"
	
	
	==> kube-controller-manager [e2197d4ac3e76cff0d93c083bdceedb63867452c1b621ea1537065ca628fedb9] <==
	I0528 21:07:37.002764       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-869191-m02" podCIDRs=["10.244.1.0/24"]
	I0528 21:07:37.627379       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-869191-m02"
	I0528 21:07:46.249967       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-869191-m02"
	I0528 21:07:48.598882       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.168703ms"
	I0528 21:07:48.618369       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.427886ms"
	I0528 21:07:48.618540       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.25µs"
	I0528 21:07:48.620903       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.239µs"
	I0528 21:07:48.630357       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.275µs"
	I0528 21:07:52.484183       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.363408ms"
	I0528 21:07:52.484573       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.883µs"
	I0528 21:07:52.799148       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.140135ms"
	I0528 21:07:52.799608       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.684µs"
	I0528 21:08:24.911318       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-869191-m02"
	I0528 21:08:24.912304       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-869191-m03\" does not exist"
	I0528 21:08:24.924294       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-869191-m03" podCIDRs=["10.244.2.0/24"]
	I0528 21:08:27.650173       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-869191-m03"
	I0528 21:08:34.768573       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-869191-m02"
	I0528 21:09:02.886526       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-869191-m02"
	I0528 21:09:03.891717       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-869191-m03\" does not exist"
	I0528 21:09:03.891917       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-869191-m02"
	I0528 21:09:03.903192       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-869191-m03" podCIDRs=["10.244.3.0/24"]
	I0528 21:09:12.763707       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-869191-m02"
	I0528 21:09:52.699606       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-869191-m03"
	I0528 21:09:52.748324       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.959311ms"
	I0528 21:09:52.748628       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.126µs"
	
	
	==> kube-proxy [6025504364d6efd0fffcdbebe00d742ff98eed2d58ab5e78e9ee8563f959b7ac] <==
	I0528 21:07:05.559517       1 server_linux.go:69] "Using iptables proxy"
	I0528 21:07:05.572942       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.65"]
	I0528 21:07:05.614660       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0528 21:07:05.614705       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0528 21:07:05.614719       1 server_linux.go:165] "Using iptables Proxier"
	I0528 21:07:05.617550       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0528 21:07:05.617824       1 server.go:872] "Version info" version="v1.30.1"
	I0528 21:07:05.617870       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 21:07:05.619455       1 config.go:192] "Starting service config controller"
	I0528 21:07:05.619508       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0528 21:07:05.619551       1 config.go:101] "Starting endpoint slice config controller"
	I0528 21:07:05.619568       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0528 21:07:05.620149       1 config.go:319] "Starting node config controller"
	I0528 21:07:05.620260       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0528 21:07:05.720150       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0528 21:07:05.720259       1 shared_informer.go:320] Caches are synced for service config
	I0528 21:07:05.720355       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [dc094a6daa47a8046b8497314256954e60b46b95bd2991e15338af3c3e6a9ae7] <==
	I0528 21:12:59.075059       1 server_linux.go:69] "Using iptables proxy"
	I0528 21:12:59.104371       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.65"]
	I0528 21:12:59.195420       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0528 21:12:59.195485       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0528 21:12:59.195503       1 server_linux.go:165] "Using iptables Proxier"
	I0528 21:12:59.206311       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0528 21:12:59.206509       1 server.go:872] "Version info" version="v1.30.1"
	I0528 21:12:59.206524       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 21:12:59.210832       1 config.go:192] "Starting service config controller"
	I0528 21:12:59.210868       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0528 21:12:59.210888       1 config.go:101] "Starting endpoint slice config controller"
	I0528 21:12:59.210892       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0528 21:12:59.211293       1 config.go:319] "Starting node config controller"
	I0528 21:12:59.211299       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0528 21:12:59.311908       1 shared_informer.go:320] Caches are synced for node config
	I0528 21:12:59.311953       1 shared_informer.go:320] Caches are synced for service config
	I0528 21:12:59.311980       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [1aa37e66c157492eb9bc9a49f3f4e20ee4a7ce5bcf6f35057d414967b79bb458] <==
	W0528 21:06:47.608292       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0528 21:06:47.608321       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0528 21:06:47.620383       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0528 21:06:47.620483       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0528 21:06:47.631881       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0528 21:06:47.631954       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0528 21:06:47.682435       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0528 21:06:47.682558       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0528 21:06:47.737261       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0528 21:06:47.737291       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0528 21:06:47.875486       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0528 21:06:47.875603       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0528 21:06:47.890467       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0528 21:06:47.890557       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0528 21:06:47.897674       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0528 21:06:47.897827       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0528 21:06:47.992863       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0528 21:06:47.992911       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0528 21:06:48.215145       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0528 21:06:48.215191       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0528 21:06:50.250440       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0528 21:11:18.414139       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0528 21:11:18.414415       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0528 21:11:18.414753       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0528 21:11:18.415737       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [b852ca44def88ca8173aed69919592580b7bd4d84c154208ea11acd9e2737eb9] <==
	I0528 21:12:55.833541       1 serving.go:380] Generated self-signed cert in-memory
	W0528 21:12:58.130847       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0528 21:12:58.130950       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0528 21:12:58.130979       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0528 21:12:58.131008       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0528 21:12:58.185388       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0528 21:12:58.185431       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 21:12:58.190384       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0528 21:12:58.192310       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0528 21:12:58.192379       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0528 21:12:58.192425       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0528 21:12:58.293266       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 28 21:12:55 multinode-869191 kubelet[3102]: I0528 21:12:55.784897    3102 kubelet_node_status.go:73] "Attempting to register node" node="multinode-869191"
	May 28 21:12:58 multinode-869191 kubelet[3102]: I0528 21:12:58.231041    3102 kubelet_node_status.go:112] "Node was previously registered" node="multinode-869191"
	May 28 21:12:58 multinode-869191 kubelet[3102]: I0528 21:12:58.231295    3102 kubelet_node_status.go:76] "Successfully registered node" node="multinode-869191"
	May 28 21:12:58 multinode-869191 kubelet[3102]: I0528 21:12:58.233158    3102 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 28 21:12:58 multinode-869191 kubelet[3102]: I0528 21:12:58.234569    3102 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 28 21:12:58 multinode-869191 kubelet[3102]: I0528 21:12:58.244094    3102 apiserver.go:52] "Watching apiserver"
	May 28 21:12:58 multinode-869191 kubelet[3102]: I0528 21:12:58.248432    3102 topology_manager.go:215] "Topology Admit Handler" podUID="fdacf113-fef4-4a34-af75-2a7908dca02f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-mj9rx"
	May 28 21:12:58 multinode-869191 kubelet[3102]: I0528 21:12:58.248555    3102 topology_manager.go:215] "Topology Admit Handler" podUID="59c6483f-f65f-490c-8b1e-7b0b425a80cf" podNamespace="kube-system" podName="kindnet-24k26"
	May 28 21:12:58 multinode-869191 kubelet[3102]: I0528 21:12:58.248679    3102 topology_manager.go:215] "Topology Admit Handler" podUID="9619acba-a019-4080-8c86-f63e7ce399bb" podNamespace="kube-system" podName="kube-proxy-sj7k8"
	May 28 21:12:58 multinode-869191 kubelet[3102]: I0528 21:12:58.248741    3102 topology_manager.go:215] "Topology Admit Handler" podUID="29c00081-275d-4209-bf8a-74849ccf882c" podNamespace="kube-system" podName="storage-provisioner"
	May 28 21:12:58 multinode-869191 kubelet[3102]: I0528 21:12:58.248787    3102 topology_manager.go:215] "Topology Admit Handler" podUID="f8887a9a-26fd-42dd-b3c5-9ff88f628dae" podNamespace="default" podName="busybox-fc5497c4f-qqxb7"
	May 28 21:12:58 multinode-869191 kubelet[3102]: I0528 21:12:58.259660    3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/59c6483f-f65f-490c-8b1e-7b0b425a80cf-cni-cfg\") pod \"kindnet-24k26\" (UID: \"59c6483f-f65f-490c-8b1e-7b0b425a80cf\") " pod="kube-system/kindnet-24k26"
	May 28 21:12:58 multinode-869191 kubelet[3102]: I0528 21:12:58.259703    3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/29c00081-275d-4209-bf8a-74849ccf882c-tmp\") pod \"storage-provisioner\" (UID: \"29c00081-275d-4209-bf8a-74849ccf882c\") " pod="kube-system/storage-provisioner"
	May 28 21:12:58 multinode-869191 kubelet[3102]: I0528 21:12:58.259738    3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/59c6483f-f65f-490c-8b1e-7b0b425a80cf-xtables-lock\") pod \"kindnet-24k26\" (UID: \"59c6483f-f65f-490c-8b1e-7b0b425a80cf\") " pod="kube-system/kindnet-24k26"
	May 28 21:12:58 multinode-869191 kubelet[3102]: I0528 21:12:58.259762    3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/59c6483f-f65f-490c-8b1e-7b0b425a80cf-lib-modules\") pod \"kindnet-24k26\" (UID: \"59c6483f-f65f-490c-8b1e-7b0b425a80cf\") " pod="kube-system/kindnet-24k26"
	May 28 21:12:58 multinode-869191 kubelet[3102]: I0528 21:12:58.260004    3102 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	May 28 21:12:58 multinode-869191 kubelet[3102]: I0528 21:12:58.360163    3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9619acba-a019-4080-8c86-f63e7ce399bb-lib-modules\") pod \"kube-proxy-sj7k8\" (UID: \"9619acba-a019-4080-8c86-f63e7ce399bb\") " pod="kube-system/kube-proxy-sj7k8"
	May 28 21:12:58 multinode-869191 kubelet[3102]: I0528 21:12:58.360369    3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9619acba-a019-4080-8c86-f63e7ce399bb-xtables-lock\") pod \"kube-proxy-sj7k8\" (UID: \"9619acba-a019-4080-8c86-f63e7ce399bb\") " pod="kube-system/kube-proxy-sj7k8"
	May 28 21:13:00 multinode-869191 kubelet[3102]: I0528 21:13:00.417763    3102 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	May 28 21:13:08 multinode-869191 kubelet[3102]: I0528 21:13:08.259869    3102 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	May 28 21:13:54 multinode-869191 kubelet[3102]: E0528 21:13:54.347430    3102 iptables.go:577] "Could not set up iptables canary" err=<
	May 28 21:13:54 multinode-869191 kubelet[3102]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 21:13:54 multinode-869191 kubelet[3102]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 21:13:54 multinode-869191 kubelet[3102]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 21:13:54 multinode-869191 kubelet[3102]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	E0528 21:14:18.196819   41339 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18966-3963/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-869191 -n multinode-869191
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-869191 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (304.78s)

x
+
TestMultiNode/serial/StopMultiNode (141.36s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-869191 stop
E0528 21:14:42.597968   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.crt: no such file or directory
E0528 21:15:40.496849   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/functional-193928/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-869191 stop: exit status 82 (2m0.466979483s)

-- stdout --
	* Stopping node "multinode-869191-m02"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-869191 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-869191 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-869191 status: exit status 3 (18.826137844s)

-- stdout --
	multinode-869191
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-869191-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	E0528 21:16:41.850041   42011 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.98:22: connect: no route to host
	E0528 21:16:41.850075   42011 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.98:22: connect: no route to host

** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-869191 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-869191 -n multinode-869191
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-869191 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-869191 logs -n 25: (1.4337847s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-869191 ssh -n                                                                 | multinode-869191 | jenkins | v1.33.1 | 28 May 24 21:08 UTC | 28 May 24 21:08 UTC |
	|         | multinode-869191-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-869191 cp multinode-869191-m02:/home/docker/cp-test.txt                       | multinode-869191 | jenkins | v1.33.1 | 28 May 24 21:08 UTC | 28 May 24 21:08 UTC |
	|         | multinode-869191:/home/docker/cp-test_multinode-869191-m02_multinode-869191.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-869191 ssh -n                                                                 | multinode-869191 | jenkins | v1.33.1 | 28 May 24 21:08 UTC | 28 May 24 21:08 UTC |
	|         | multinode-869191-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-869191 ssh -n multinode-869191 sudo cat                                       | multinode-869191 | jenkins | v1.33.1 | 28 May 24 21:08 UTC | 28 May 24 21:08 UTC |
	|         | /home/docker/cp-test_multinode-869191-m02_multinode-869191.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-869191 cp multinode-869191-m02:/home/docker/cp-test.txt                       | multinode-869191 | jenkins | v1.33.1 | 28 May 24 21:08 UTC | 28 May 24 21:08 UTC |
	|         | multinode-869191-m03:/home/docker/cp-test_multinode-869191-m02_multinode-869191-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-869191 ssh -n                                                                 | multinode-869191 | jenkins | v1.33.1 | 28 May 24 21:08 UTC | 28 May 24 21:08 UTC |
	|         | multinode-869191-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-869191 ssh -n multinode-869191-m03 sudo cat                                   | multinode-869191 | jenkins | v1.33.1 | 28 May 24 21:08 UTC | 28 May 24 21:08 UTC |
	|         | /home/docker/cp-test_multinode-869191-m02_multinode-869191-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-869191 cp testdata/cp-test.txt                                                | multinode-869191 | jenkins | v1.33.1 | 28 May 24 21:08 UTC | 28 May 24 21:08 UTC |
	|         | multinode-869191-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-869191 ssh -n                                                                 | multinode-869191 | jenkins | v1.33.1 | 28 May 24 21:08 UTC | 28 May 24 21:08 UTC |
	|         | multinode-869191-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-869191 cp multinode-869191-m03:/home/docker/cp-test.txt                       | multinode-869191 | jenkins | v1.33.1 | 28 May 24 21:08 UTC | 28 May 24 21:08 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2076892289/001/cp-test_multinode-869191-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-869191 ssh -n                                                                 | multinode-869191 | jenkins | v1.33.1 | 28 May 24 21:08 UTC | 28 May 24 21:08 UTC |
	|         | multinode-869191-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-869191 cp multinode-869191-m03:/home/docker/cp-test.txt                       | multinode-869191 | jenkins | v1.33.1 | 28 May 24 21:08 UTC | 28 May 24 21:08 UTC |
	|         | multinode-869191:/home/docker/cp-test_multinode-869191-m03_multinode-869191.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-869191 ssh -n                                                                 | multinode-869191 | jenkins | v1.33.1 | 28 May 24 21:08 UTC | 28 May 24 21:08 UTC |
	|         | multinode-869191-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-869191 ssh -n multinode-869191 sudo cat                                       | multinode-869191 | jenkins | v1.33.1 | 28 May 24 21:08 UTC | 28 May 24 21:08 UTC |
	|         | /home/docker/cp-test_multinode-869191-m03_multinode-869191.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-869191 cp multinode-869191-m03:/home/docker/cp-test.txt                       | multinode-869191 | jenkins | v1.33.1 | 28 May 24 21:08 UTC | 28 May 24 21:08 UTC |
	|         | multinode-869191-m02:/home/docker/cp-test_multinode-869191-m03_multinode-869191-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-869191 ssh -n                                                                 | multinode-869191 | jenkins | v1.33.1 | 28 May 24 21:08 UTC | 28 May 24 21:08 UTC |
	|         | multinode-869191-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-869191 ssh -n multinode-869191-m02 sudo cat                                   | multinode-869191 | jenkins | v1.33.1 | 28 May 24 21:08 UTC | 28 May 24 21:08 UTC |
	|         | /home/docker/cp-test_multinode-869191-m03_multinode-869191-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-869191 node stop m03                                                          | multinode-869191 | jenkins | v1.33.1 | 28 May 24 21:08 UTC | 28 May 24 21:08 UTC |
	| node    | multinode-869191 node start                                                             | multinode-869191 | jenkins | v1.33.1 | 28 May 24 21:08 UTC | 28 May 24 21:09 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-869191                                                                | multinode-869191 | jenkins | v1.33.1 | 28 May 24 21:09 UTC |                     |
	| stop    | -p multinode-869191                                                                     | multinode-869191 | jenkins | v1.33.1 | 28 May 24 21:09 UTC |                     |
	| start   | -p multinode-869191                                                                     | multinode-869191 | jenkins | v1.33.1 | 28 May 24 21:11 UTC | 28 May 24 21:14 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-869191                                                                | multinode-869191 | jenkins | v1.33.1 | 28 May 24 21:14 UTC |                     |
	| node    | multinode-869191 node delete                                                            | multinode-869191 | jenkins | v1.33.1 | 28 May 24 21:14 UTC | 28 May 24 21:14 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-869191 stop                                                                   | multinode-869191 | jenkins | v1.33.1 | 28 May 24 21:14 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/28 21:11:17
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0528 21:11:17.363297   40275 out.go:291] Setting OutFile to fd 1 ...
	I0528 21:11:17.363540   40275 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:11:17.363549   40275 out.go:304] Setting ErrFile to fd 2...
	I0528 21:11:17.363553   40275 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:11:17.363726   40275 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
	I0528 21:11:17.364232   40275 out.go:298] Setting JSON to false
	I0528 21:11:17.365099   40275 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3220,"bootTime":1716927457,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0528 21:11:17.365154   40275 start.go:139] virtualization: kvm guest
	I0528 21:11:17.367494   40275 out.go:177] * [multinode-869191] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0528 21:11:17.368737   40275 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 21:11:17.369846   40275 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 21:11:17.368781   40275 notify.go:220] Checking for updates...
	I0528 21:11:17.372252   40275 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 21:11:17.373537   40275 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 21:11:17.374820   40275 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0528 21:11:17.376066   40275 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 21:11:17.377624   40275 config.go:182] Loaded profile config "multinode-869191": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 21:11:17.377704   40275 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 21:11:17.378126   40275 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:11:17.378165   40275 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:11:17.401748   40275 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32893
	I0528 21:11:17.402192   40275 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:11:17.402761   40275 main.go:141] libmachine: Using API Version  1
	I0528 21:11:17.402779   40275 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:11:17.403148   40275 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:11:17.403423   40275 main.go:141] libmachine: (multinode-869191) Calling .DriverName
	I0528 21:11:17.438932   40275 out.go:177] * Using the kvm2 driver based on existing profile
	I0528 21:11:17.440130   40275 start.go:297] selected driver: kvm2
	I0528 21:11:17.440144   40275 start.go:901] validating driver "kvm2" against &{Name:multinode-869191 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.1 ClusterName:multinode-869191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.98 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.154 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingre
ss-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMir
ror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:11:17.440290   40275 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0528 21:11:17.440600   40275 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 21:11:17.440662   40275 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18966-3963/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0528 21:11:17.455368   40275 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0528 21:11:17.455994   40275 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 21:11:17.456057   40275 cni.go:84] Creating CNI manager for ""
	I0528 21:11:17.456068   40275 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0528 21:11:17.456120   40275 start.go:340] cluster config:
	{Name:multinode-869191 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-869191 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.98 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.154 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false ko
ng:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:11:17.456235   40275 iso.go:125] acquiring lock: {Name:mk0ef48bd19f4019c3ee87391d15b9afc1d5e8d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 21:11:17.458842   40275 out.go:177] * Starting "multinode-869191" primary control-plane node in "multinode-869191" cluster
	I0528 21:11:17.460122   40275 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 21:11:17.460156   40275 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0528 21:11:17.460166   40275 cache.go:56] Caching tarball of preloaded images
	I0528 21:11:17.460230   40275 preload.go:173] Found /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0528 21:11:17.460240   40275 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0528 21:11:17.460355   40275 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/multinode-869191/config.json ...
	I0528 21:11:17.460540   40275 start.go:360] acquireMachinesLock for multinode-869191: {Name:mk6fb3103b370c7f3d9923fd9de7afe2c77239d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0528 21:11:17.460580   40275 start.go:364] duration metric: took 22.039µs to acquireMachinesLock for "multinode-869191"
	I0528 21:11:17.460594   40275 start.go:96] Skipping create...Using existing machine configuration
	I0528 21:11:17.460605   40275 fix.go:54] fixHost starting: 
	I0528 21:11:17.460972   40275 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:11:17.461017   40275 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:11:17.475647   40275 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46367
	I0528 21:11:17.476091   40275 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:11:17.476604   40275 main.go:141] libmachine: Using API Version  1
	I0528 21:11:17.476631   40275 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:11:17.476946   40275 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:11:17.477142   40275 main.go:141] libmachine: (multinode-869191) Calling .DriverName
	I0528 21:11:17.477333   40275 main.go:141] libmachine: (multinode-869191) Calling .GetState
	I0528 21:11:17.478925   40275 fix.go:112] recreateIfNeeded on multinode-869191: state=Running err=<nil>
	W0528 21:11:17.478943   40275 fix.go:138] unexpected machine state, will restart: <nil>
	I0528 21:11:17.481275   40275 out.go:177] * Updating the running kvm2 "multinode-869191" VM ...
	I0528 21:11:17.482552   40275 machine.go:94] provisionDockerMachine start ...
	I0528 21:11:17.482571   40275 main.go:141] libmachine: (multinode-869191) Calling .DriverName
	I0528 21:11:17.482750   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHHostname
	I0528 21:11:17.485204   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:11:17.485722   40275 main.go:141] libmachine: (multinode-869191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:1a:11", ip: ""} in network mk-multinode-869191: {Iface:virbr1 ExpiryTime:2024-05-28 22:06:25 +0000 UTC Type:0 Mac:52:54:00:4e:1a:11 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-869191 Clientid:01:52:54:00:4e:1a:11}
	I0528 21:11:17.485748   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined IP address 192.168.39.65 and MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:11:17.485896   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHPort
	I0528 21:11:17.486067   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHKeyPath
	I0528 21:11:17.486202   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHKeyPath
	I0528 21:11:17.486329   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHUsername
	I0528 21:11:17.486464   40275 main.go:141] libmachine: Using SSH client type: native
	I0528 21:11:17.486641   40275 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0528 21:11:17.486650   40275 main.go:141] libmachine: About to run SSH command:
	hostname
	I0528 21:11:17.603436   40275 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-869191
	
	I0528 21:11:17.603466   40275 main.go:141] libmachine: (multinode-869191) Calling .GetMachineName
	I0528 21:11:17.603711   40275 buildroot.go:166] provisioning hostname "multinode-869191"
	I0528 21:11:17.603739   40275 main.go:141] libmachine: (multinode-869191) Calling .GetMachineName
	I0528 21:11:17.603917   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHHostname
	I0528 21:11:17.606504   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:11:17.606880   40275 main.go:141] libmachine: (multinode-869191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:1a:11", ip: ""} in network mk-multinode-869191: {Iface:virbr1 ExpiryTime:2024-05-28 22:06:25 +0000 UTC Type:0 Mac:52:54:00:4e:1a:11 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-869191 Clientid:01:52:54:00:4e:1a:11}
	I0528 21:11:17.606921   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined IP address 192.168.39.65 and MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:11:17.607035   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHPort
	I0528 21:11:17.607244   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHKeyPath
	I0528 21:11:17.607526   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHKeyPath
	I0528 21:11:17.607690   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHUsername
	I0528 21:11:17.607851   40275 main.go:141] libmachine: Using SSH client type: native
	I0528 21:11:17.608043   40275 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0528 21:11:17.608060   40275 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-869191 && echo "multinode-869191" | sudo tee /etc/hostname
	I0528 21:11:17.738310   40275 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-869191
	
	I0528 21:11:17.738338   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHHostname
	I0528 21:11:17.741341   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:11:17.741744   40275 main.go:141] libmachine: (multinode-869191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:1a:11", ip: ""} in network mk-multinode-869191: {Iface:virbr1 ExpiryTime:2024-05-28 22:06:25 +0000 UTC Type:0 Mac:52:54:00:4e:1a:11 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-869191 Clientid:01:52:54:00:4e:1a:11}
	I0528 21:11:17.741791   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined IP address 192.168.39.65 and MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:11:17.741898   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHPort
	I0528 21:11:17.742088   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHKeyPath
	I0528 21:11:17.742249   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHKeyPath
	I0528 21:11:17.742403   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHUsername
	I0528 21:11:17.742584   40275 main.go:141] libmachine: Using SSH client type: native
	I0528 21:11:17.742789   40275 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0528 21:11:17.742808   40275 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-869191' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-869191/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-869191' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 21:11:17.859295   40275 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 21:11:17.859323   40275 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18966-3963/.minikube CaCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18966-3963/.minikube}
	I0528 21:11:17.859364   40275 buildroot.go:174] setting up certificates
	I0528 21:11:17.859371   40275 provision.go:84] configureAuth start
	I0528 21:11:17.859379   40275 main.go:141] libmachine: (multinode-869191) Calling .GetMachineName
	I0528 21:11:17.859780   40275 main.go:141] libmachine: (multinode-869191) Calling .GetIP
	I0528 21:11:17.862547   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:11:17.862913   40275 main.go:141] libmachine: (multinode-869191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:1a:11", ip: ""} in network mk-multinode-869191: {Iface:virbr1 ExpiryTime:2024-05-28 22:06:25 +0000 UTC Type:0 Mac:52:54:00:4e:1a:11 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-869191 Clientid:01:52:54:00:4e:1a:11}
	I0528 21:11:17.862936   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined IP address 192.168.39.65 and MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:11:17.863086   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHHostname
	I0528 21:11:17.865390   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:11:17.865823   40275 main.go:141] libmachine: (multinode-869191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:1a:11", ip: ""} in network mk-multinode-869191: {Iface:virbr1 ExpiryTime:2024-05-28 22:06:25 +0000 UTC Type:0 Mac:52:54:00:4e:1a:11 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-869191 Clientid:01:52:54:00:4e:1a:11}
	I0528 21:11:17.865849   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined IP address 192.168.39.65 and MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:11:17.866030   40275 provision.go:143] copyHostCerts
	I0528 21:11:17.866061   40275 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem
	I0528 21:11:17.866102   40275 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem, removing ...
	I0528 21:11:17.866118   40275 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem
	I0528 21:11:17.866192   40275 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem (1675 bytes)
	I0528 21:11:17.866299   40275 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem
	I0528 21:11:17.866324   40275 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem, removing ...
	I0528 21:11:17.866330   40275 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem
	I0528 21:11:17.866369   40275 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem (1078 bytes)
	I0528 21:11:17.866427   40275 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem
	I0528 21:11:17.866450   40275 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem, removing ...
	I0528 21:11:17.866467   40275 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem
	I0528 21:11:17.866502   40275 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem (1123 bytes)
	I0528 21:11:17.866563   40275 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem org=jenkins.multinode-869191 san=[127.0.0.1 192.168.39.65 localhost minikube multinode-869191]
	I0528 21:11:18.113588   40275 provision.go:177] copyRemoteCerts
	I0528 21:11:18.113648   40275 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 21:11:18.113679   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHHostname
	I0528 21:11:18.116825   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:11:18.117187   40275 main.go:141] libmachine: (multinode-869191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:1a:11", ip: ""} in network mk-multinode-869191: {Iface:virbr1 ExpiryTime:2024-05-28 22:06:25 +0000 UTC Type:0 Mac:52:54:00:4e:1a:11 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-869191 Clientid:01:52:54:00:4e:1a:11}
	I0528 21:11:18.117215   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined IP address 192.168.39.65 and MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:11:18.117378   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHPort
	I0528 21:11:18.117568   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHKeyPath
	I0528 21:11:18.117775   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHUsername
	I0528 21:11:18.117917   40275 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/multinode-869191/id_rsa Username:docker}
	I0528 21:11:18.205121   40275 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0528 21:11:18.205196   40275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0528 21:11:18.231910   40275 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0528 21:11:18.231977   40275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0528 21:11:18.256928   40275 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0528 21:11:18.256999   40275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0528 21:11:18.281918   40275 provision.go:87] duration metric: took 422.532986ms to configureAuth
	I0528 21:11:18.281957   40275 buildroot.go:189] setting minikube options for container-runtime
	I0528 21:11:18.282194   40275 config.go:182] Loaded profile config "multinode-869191": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 21:11:18.282274   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHHostname
	I0528 21:11:18.284945   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:11:18.285317   40275 main.go:141] libmachine: (multinode-869191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:1a:11", ip: ""} in network mk-multinode-869191: {Iface:virbr1 ExpiryTime:2024-05-28 22:06:25 +0000 UTC Type:0 Mac:52:54:00:4e:1a:11 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-869191 Clientid:01:52:54:00:4e:1a:11}
	I0528 21:11:18.285344   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined IP address 192.168.39.65 and MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:11:18.285519   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHPort
	I0528 21:11:18.285727   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHKeyPath
	I0528 21:11:18.285876   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHKeyPath
	I0528 21:11:18.286011   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHUsername
	I0528 21:11:18.286154   40275 main.go:141] libmachine: Using SSH client type: native
	I0528 21:11:18.286313   40275 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0528 21:11:18.286327   40275 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0528 21:12:49.170890   40275 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0528 21:12:49.170921   40275 machine.go:97] duration metric: took 1m31.688354057s to provisionDockerMachine
	I0528 21:12:49.170937   40275 start.go:293] postStartSetup for "multinode-869191" (driver="kvm2")
	I0528 21:12:49.170951   40275 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0528 21:12:49.170978   40275 main.go:141] libmachine: (multinode-869191) Calling .DriverName
	I0528 21:12:49.171292   40275 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0528 21:12:49.171320   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHHostname
	I0528 21:12:49.174553   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:12:49.175079   40275 main.go:141] libmachine: (multinode-869191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:1a:11", ip: ""} in network mk-multinode-869191: {Iface:virbr1 ExpiryTime:2024-05-28 22:06:25 +0000 UTC Type:0 Mac:52:54:00:4e:1a:11 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-869191 Clientid:01:52:54:00:4e:1a:11}
	I0528 21:12:49.175113   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined IP address 192.168.39.65 and MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:12:49.175359   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHPort
	I0528 21:12:49.175561   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHKeyPath
	I0528 21:12:49.175761   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHUsername
	I0528 21:12:49.175943   40275 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/multinode-869191/id_rsa Username:docker}
	I0528 21:12:49.267168   40275 ssh_runner.go:195] Run: cat /etc/os-release
	I0528 21:12:49.271583   40275 command_runner.go:130] > NAME=Buildroot
	I0528 21:12:49.271604   40275 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0528 21:12:49.271608   40275 command_runner.go:130] > ID=buildroot
	I0528 21:12:49.271614   40275 command_runner.go:130] > VERSION_ID=2023.02.9
	I0528 21:12:49.271618   40275 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0528 21:12:49.271657   40275 info.go:137] Remote host: Buildroot 2023.02.9
	I0528 21:12:49.271670   40275 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/addons for local assets ...
	I0528 21:12:49.271723   40275 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/files for local assets ...
	I0528 21:12:49.271803   40275 filesync.go:149] local asset: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem -> 117602.pem in /etc/ssl/certs
	I0528 21:12:49.271814   40275 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem -> /etc/ssl/certs/117602.pem
	I0528 21:12:49.271890   40275 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0528 21:12:49.281736   40275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem --> /etc/ssl/certs/117602.pem (1708 bytes)
	I0528 21:12:49.306997   40275 start.go:296] duration metric: took 136.046323ms for postStartSetup
	I0528 21:12:49.307037   40275 fix.go:56] duration metric: took 1m31.846435548s for fixHost
	I0528 21:12:49.307057   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHHostname
	I0528 21:12:49.309971   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:12:49.310367   40275 main.go:141] libmachine: (multinode-869191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:1a:11", ip: ""} in network mk-multinode-869191: {Iface:virbr1 ExpiryTime:2024-05-28 22:06:25 +0000 UTC Type:0 Mac:52:54:00:4e:1a:11 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-869191 Clientid:01:52:54:00:4e:1a:11}
	I0528 21:12:49.310390   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined IP address 192.168.39.65 and MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:12:49.310519   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHPort
	I0528 21:12:49.310713   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHKeyPath
	I0528 21:12:49.310859   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHKeyPath
	I0528 21:12:49.310991   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHUsername
	I0528 21:12:49.311174   40275 main.go:141] libmachine: Using SSH client type: native
	I0528 21:12:49.311400   40275 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0528 21:12:49.311412   40275 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0528 21:12:49.422793   40275 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716930769.399574046
	
	I0528 21:12:49.422814   40275 fix.go:216] guest clock: 1716930769.399574046
	I0528 21:12:49.422833   40275 fix.go:229] Guest: 2024-05-28 21:12:49.399574046 +0000 UTC Remote: 2024-05-28 21:12:49.307041177 +0000 UTC m=+91.978147062 (delta=92.532869ms)
	I0528 21:12:49.422851   40275 fix.go:200] guest clock delta is within tolerance: 92.532869ms
	I0528 21:12:49.422857   40275 start.go:83] releasing machines lock for "multinode-869191", held for 1m31.962267719s
	I0528 21:12:49.422877   40275 main.go:141] libmachine: (multinode-869191) Calling .DriverName
	I0528 21:12:49.423138   40275 main.go:141] libmachine: (multinode-869191) Calling .GetIP
	I0528 21:12:49.425714   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:12:49.426138   40275 main.go:141] libmachine: (multinode-869191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:1a:11", ip: ""} in network mk-multinode-869191: {Iface:virbr1 ExpiryTime:2024-05-28 22:06:25 +0000 UTC Type:0 Mac:52:54:00:4e:1a:11 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-869191 Clientid:01:52:54:00:4e:1a:11}
	I0528 21:12:49.426185   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined IP address 192.168.39.65 and MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:12:49.426282   40275 main.go:141] libmachine: (multinode-869191) Calling .DriverName
	I0528 21:12:49.426826   40275 main.go:141] libmachine: (multinode-869191) Calling .DriverName
	I0528 21:12:49.427036   40275 main.go:141] libmachine: (multinode-869191) Calling .DriverName
	I0528 21:12:49.427136   40275 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0528 21:12:49.427185   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHHostname
	I0528 21:12:49.427249   40275 ssh_runner.go:195] Run: cat /version.json
	I0528 21:12:49.427289   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHHostname
	I0528 21:12:49.429813   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:12:49.430194   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:12:49.430286   40275 main.go:141] libmachine: (multinode-869191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:1a:11", ip: ""} in network mk-multinode-869191: {Iface:virbr1 ExpiryTime:2024-05-28 22:06:25 +0000 UTC Type:0 Mac:52:54:00:4e:1a:11 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-869191 Clientid:01:52:54:00:4e:1a:11}
	I0528 21:12:49.430330   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined IP address 192.168.39.65 and MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:12:49.430445   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHPort
	I0528 21:12:49.430801   40275 main.go:141] libmachine: (multinode-869191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:1a:11", ip: ""} in network mk-multinode-869191: {Iface:virbr1 ExpiryTime:2024-05-28 22:06:25 +0000 UTC Type:0 Mac:52:54:00:4e:1a:11 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-869191 Clientid:01:52:54:00:4e:1a:11}
	I0528 21:12:49.430858   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHPort
	I0528 21:12:49.430931   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHKeyPath
	I0528 21:12:49.430843   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined IP address 192.168.39.65 and MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:12:49.432195   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHUsername
	I0528 21:12:49.432209   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHKeyPath
	I0528 21:12:49.432412   40275 main.go:141] libmachine: (multinode-869191) Calling .GetSSHUsername
	I0528 21:12:49.432428   40275 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/multinode-869191/id_rsa Username:docker}
	I0528 21:12:49.432551   40275 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/multinode-869191/id_rsa Username:docker}
	I0528 21:12:49.540816   40275 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0528 21:12:49.540861   40275 command_runner.go:130] > {"iso_version": "v1.33.1-1716398070-18934", "kicbase_version": "v0.0.44-1716228441-18934", "minikube_version": "v1.33.1", "commit": "7bc64cce06153f72c1bf9cbcf2114663ad5af3b7"}
	I0528 21:12:49.540992   40275 ssh_runner.go:195] Run: systemctl --version
	I0528 21:12:49.546981   40275 command_runner.go:130] > systemd 252 (252)
	I0528 21:12:49.547021   40275 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0528 21:12:49.547304   40275 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0528 21:12:49.713825   40275 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0528 21:12:49.720038   40275 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0528 21:12:49.720306   40275 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0528 21:12:49.720382   40275 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 21:12:49.729775   40275 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0528 21:12:49.729799   40275 start.go:494] detecting cgroup driver to use...
	I0528 21:12:49.729857   40275 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0528 21:12:49.745749   40275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 21:12:49.759251   40275 docker.go:217] disabling cri-docker service (if available) ...
	I0528 21:12:49.759306   40275 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0528 21:12:49.772497   40275 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0528 21:12:49.785592   40275 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0528 21:12:49.929155   40275 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0528 21:12:50.073127   40275 docker.go:233] disabling docker service ...
	I0528 21:12:50.073221   40275 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0528 21:12:50.090853   40275 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0528 21:12:50.104620   40275 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0528 21:12:50.244043   40275 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0528 21:12:50.389900   40275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0528 21:12:50.404235   40275 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 21:12:50.423682   40275 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0528 21:12:50.424150   40275 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0528 21:12:50.424225   40275 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:12:50.434681   40275 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0528 21:12:50.434735   40275 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:12:50.445096   40275 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:12:50.455436   40275 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:12:50.465825   40275 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0528 21:12:50.476902   40275 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:12:50.488300   40275 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:12:50.500561   40275 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:12:50.512094   40275 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0528 21:12:50.522088   40275 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0528 21:12:50.522399   40275 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0528 21:12:50.532649   40275 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 21:12:50.673783   40275 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0528 21:12:51.520421   40275 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0528 21:12:51.520490   40275 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0528 21:12:51.525769   40275 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0528 21:12:51.525796   40275 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0528 21:12:51.525805   40275 command_runner.go:130] > Device: 0,22	Inode: 1340        Links: 1
	I0528 21:12:51.525814   40275 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0528 21:12:51.525821   40275 command_runner.go:130] > Access: 2024-05-28 21:12:51.391295034 +0000
	I0528 21:12:51.525830   40275 command_runner.go:130] > Modify: 2024-05-28 21:12:51.391295034 +0000
	I0528 21:12:51.525838   40275 command_runner.go:130] > Change: 2024-05-28 21:12:51.391295034 +0000
	I0528 21:12:51.525847   40275 command_runner.go:130] >  Birth: -
	I0528 21:12:51.526100   40275 start.go:562] Will wait 60s for crictl version
	I0528 21:12:51.526154   40275 ssh_runner.go:195] Run: which crictl
	I0528 21:12:51.530069   40275 command_runner.go:130] > /usr/bin/crictl
	I0528 21:12:51.530183   40275 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0528 21:12:51.572037   40275 command_runner.go:130] > Version:  0.1.0
	I0528 21:12:51.572060   40275 command_runner.go:130] > RuntimeName:  cri-o
	I0528 21:12:51.572068   40275 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0528 21:12:51.572076   40275 command_runner.go:130] > RuntimeApiVersion:  v1
	I0528 21:12:51.572099   40275 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0528 21:12:51.572165   40275 ssh_runner.go:195] Run: crio --version
	I0528 21:12:51.604895   40275 command_runner.go:130] > crio version 1.29.1
	I0528 21:12:51.604921   40275 command_runner.go:130] > Version:        1.29.1
	I0528 21:12:51.604971   40275 command_runner.go:130] > GitCommit:      unknown
	I0528 21:12:51.604995   40275 command_runner.go:130] > GitCommitDate:  unknown
	I0528 21:12:51.605002   40275 command_runner.go:130] > GitTreeState:   clean
	I0528 21:12:51.605013   40275 command_runner.go:130] > BuildDate:      2024-05-22T23:02:45Z
	I0528 21:12:51.605021   40275 command_runner.go:130] > GoVersion:      go1.21.6
	I0528 21:12:51.605026   40275 command_runner.go:130] > Compiler:       gc
	I0528 21:12:51.605032   40275 command_runner.go:130] > Platform:       linux/amd64
	I0528 21:12:51.605037   40275 command_runner.go:130] > Linkmode:       dynamic
	I0528 21:12:51.605046   40275 command_runner.go:130] > BuildTags:      
	I0528 21:12:51.605050   40275 command_runner.go:130] >   containers_image_ostree_stub
	I0528 21:12:51.605055   40275 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0528 21:12:51.605059   40275 command_runner.go:130] >   btrfs_noversion
	I0528 21:12:51.605064   40275 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0528 21:12:51.605072   40275 command_runner.go:130] >   libdm_no_deferred_remove
	I0528 21:12:51.605079   40275 command_runner.go:130] >   seccomp
	I0528 21:12:51.605093   40275 command_runner.go:130] > LDFlags:          unknown
	I0528 21:12:51.605100   40275 command_runner.go:130] > SeccompEnabled:   true
	I0528 21:12:51.605106   40275 command_runner.go:130] > AppArmorEnabled:  false
	I0528 21:12:51.605173   40275 ssh_runner.go:195] Run: crio --version
	I0528 21:12:51.637205   40275 command_runner.go:130] > crio version 1.29.1
	I0528 21:12:51.637237   40275 command_runner.go:130] > Version:        1.29.1
	I0528 21:12:51.637247   40275 command_runner.go:130] > GitCommit:      unknown
	I0528 21:12:51.637254   40275 command_runner.go:130] > GitCommitDate:  unknown
	I0528 21:12:51.637263   40275 command_runner.go:130] > GitTreeState:   clean
	I0528 21:12:51.637270   40275 command_runner.go:130] > BuildDate:      2024-05-22T23:02:45Z
	I0528 21:12:51.637274   40275 command_runner.go:130] > GoVersion:      go1.21.6
	I0528 21:12:51.637278   40275 command_runner.go:130] > Compiler:       gc
	I0528 21:12:51.637282   40275 command_runner.go:130] > Platform:       linux/amd64
	I0528 21:12:51.637287   40275 command_runner.go:130] > Linkmode:       dynamic
	I0528 21:12:51.637291   40275 command_runner.go:130] > BuildTags:      
	I0528 21:12:51.637295   40275 command_runner.go:130] >   containers_image_ostree_stub
	I0528 21:12:51.637299   40275 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0528 21:12:51.637302   40275 command_runner.go:130] >   btrfs_noversion
	I0528 21:12:51.637306   40275 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0528 21:12:51.637310   40275 command_runner.go:130] >   libdm_no_deferred_remove
	I0528 21:12:51.637314   40275 command_runner.go:130] >   seccomp
	I0528 21:12:51.637319   40275 command_runner.go:130] > LDFlags:          unknown
	I0528 21:12:51.637329   40275 command_runner.go:130] > SeccompEnabled:   true
	I0528 21:12:51.637337   40275 command_runner.go:130] > AppArmorEnabled:  false
	I0528 21:12:51.640784   40275 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0528 21:12:51.642219   40275 main.go:141] libmachine: (multinode-869191) Calling .GetIP
	I0528 21:12:51.644755   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:12:51.645082   40275 main.go:141] libmachine: (multinode-869191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:1a:11", ip: ""} in network mk-multinode-869191: {Iface:virbr1 ExpiryTime:2024-05-28 22:06:25 +0000 UTC Type:0 Mac:52:54:00:4e:1a:11 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-869191 Clientid:01:52:54:00:4e:1a:11}
	I0528 21:12:51.645109   40275 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined IP address 192.168.39.65 and MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:12:51.645417   40275 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0528 21:12:51.650172   40275 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0528 21:12:51.650295   40275 kubeadm.go:877] updating cluster {Name:multinode-869191 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-869191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.98 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.154 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0528 21:12:51.650432   40275 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 21:12:51.650493   40275 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 21:12:51.692412   40275 command_runner.go:130] > {
	I0528 21:12:51.692433   40275 command_runner.go:130] >   "images": [
	I0528 21:12:51.692439   40275 command_runner.go:130] >     {
	I0528 21:12:51.692450   40275 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0528 21:12:51.692456   40275 command_runner.go:130] >       "repoTags": [
	I0528 21:12:51.692465   40275 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0528 21:12:51.692471   40275 command_runner.go:130] >       ],
	I0528 21:12:51.692476   40275 command_runner.go:130] >       "repoDigests": [
	I0528 21:12:51.692487   40275 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0528 21:12:51.692496   40275 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0528 21:12:51.692502   40275 command_runner.go:130] >       ],
	I0528 21:12:51.692510   40275 command_runner.go:130] >       "size": "65291810",
	I0528 21:12:51.692519   40275 command_runner.go:130] >       "uid": null,
	I0528 21:12:51.692527   40275 command_runner.go:130] >       "username": "",
	I0528 21:12:51.692539   40275 command_runner.go:130] >       "spec": null,
	I0528 21:12:51.692545   40275 command_runner.go:130] >       "pinned": false
	I0528 21:12:51.692551   40275 command_runner.go:130] >     },
	I0528 21:12:51.692557   40275 command_runner.go:130] >     {
	I0528 21:12:51.692568   40275 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0528 21:12:51.692579   40275 command_runner.go:130] >       "repoTags": [
	I0528 21:12:51.692589   40275 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0528 21:12:51.692595   40275 command_runner.go:130] >       ],
	I0528 21:12:51.692602   40275 command_runner.go:130] >       "repoDigests": [
	I0528 21:12:51.692615   40275 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0528 21:12:51.692631   40275 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0528 21:12:51.692637   40275 command_runner.go:130] >       ],
	I0528 21:12:51.692652   40275 command_runner.go:130] >       "size": "65908273",
	I0528 21:12:51.692662   40275 command_runner.go:130] >       "uid": null,
	I0528 21:12:51.692674   40275 command_runner.go:130] >       "username": "",
	I0528 21:12:51.692684   40275 command_runner.go:130] >       "spec": null,
	I0528 21:12:51.692691   40275 command_runner.go:130] >       "pinned": false
	I0528 21:12:51.692698   40275 command_runner.go:130] >     },
	I0528 21:12:51.692703   40275 command_runner.go:130] >     {
	I0528 21:12:51.692715   40275 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0528 21:12:51.692725   40275 command_runner.go:130] >       "repoTags": [
	I0528 21:12:51.692734   40275 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0528 21:12:51.692743   40275 command_runner.go:130] >       ],
	I0528 21:12:51.692750   40275 command_runner.go:130] >       "repoDigests": [
	I0528 21:12:51.692763   40275 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0528 21:12:51.692778   40275 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0528 21:12:51.692787   40275 command_runner.go:130] >       ],
	I0528 21:12:51.692795   40275 command_runner.go:130] >       "size": "1363676",
	I0528 21:12:51.692803   40275 command_runner.go:130] >       "uid": null,
	I0528 21:12:51.692811   40275 command_runner.go:130] >       "username": "",
	I0528 21:12:51.692820   40275 command_runner.go:130] >       "spec": null,
	I0528 21:12:51.692829   40275 command_runner.go:130] >       "pinned": false
	I0528 21:12:51.692838   40275 command_runner.go:130] >     },
	I0528 21:12:51.692844   40275 command_runner.go:130] >     {
	I0528 21:12:51.692858   40275 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0528 21:12:51.692868   40275 command_runner.go:130] >       "repoTags": [
	I0528 21:12:51.692880   40275 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0528 21:12:51.692889   40275 command_runner.go:130] >       ],
	I0528 21:12:51.692896   40275 command_runner.go:130] >       "repoDigests": [
	I0528 21:12:51.692912   40275 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0528 21:12:51.692937   40275 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0528 21:12:51.692945   40275 command_runner.go:130] >       ],
	I0528 21:12:51.692951   40275 command_runner.go:130] >       "size": "31470524",
	I0528 21:12:51.692957   40275 command_runner.go:130] >       "uid": null,
	I0528 21:12:51.692963   40275 command_runner.go:130] >       "username": "",
	I0528 21:12:51.692970   40275 command_runner.go:130] >       "spec": null,
	I0528 21:12:51.692976   40275 command_runner.go:130] >       "pinned": false
	I0528 21:12:51.692980   40275 command_runner.go:130] >     },
	I0528 21:12:51.692994   40275 command_runner.go:130] >     {
	I0528 21:12:51.693008   40275 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0528 21:12:51.693018   40275 command_runner.go:130] >       "repoTags": [
	I0528 21:12:51.693031   40275 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0528 21:12:51.693054   40275 command_runner.go:130] >       ],
	I0528 21:12:51.693067   40275 command_runner.go:130] >       "repoDigests": [
	I0528 21:12:51.693080   40275 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0528 21:12:51.693096   40275 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0528 21:12:51.693104   40275 command_runner.go:130] >       ],
	I0528 21:12:51.693112   40275 command_runner.go:130] >       "size": "61245718",
	I0528 21:12:51.693122   40275 command_runner.go:130] >       "uid": null,
	I0528 21:12:51.693131   40275 command_runner.go:130] >       "username": "nonroot",
	I0528 21:12:51.693141   40275 command_runner.go:130] >       "spec": null,
	I0528 21:12:51.693150   40275 command_runner.go:130] >       "pinned": false
	I0528 21:12:51.693156   40275 command_runner.go:130] >     },
	I0528 21:12:51.693164   40275 command_runner.go:130] >     {
	I0528 21:12:51.693175   40275 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0528 21:12:51.693185   40275 command_runner.go:130] >       "repoTags": [
	I0528 21:12:51.693193   40275 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0528 21:12:51.693202   40275 command_runner.go:130] >       ],
	I0528 21:12:51.693210   40275 command_runner.go:130] >       "repoDigests": [
	I0528 21:12:51.693232   40275 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0528 21:12:51.693247   40275 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0528 21:12:51.693255   40275 command_runner.go:130] >       ],
	I0528 21:12:51.693263   40275 command_runner.go:130] >       "size": "150779692",
	I0528 21:12:51.693272   40275 command_runner.go:130] >       "uid": {
	I0528 21:12:51.693279   40275 command_runner.go:130] >         "value": "0"
	I0528 21:12:51.693288   40275 command_runner.go:130] >       },
	I0528 21:12:51.693296   40275 command_runner.go:130] >       "username": "",
	I0528 21:12:51.693305   40275 command_runner.go:130] >       "spec": null,
	I0528 21:12:51.693313   40275 command_runner.go:130] >       "pinned": false
	I0528 21:12:51.693322   40275 command_runner.go:130] >     },
	I0528 21:12:51.693328   40275 command_runner.go:130] >     {
	I0528 21:12:51.693341   40275 command_runner.go:130] >       "id": "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a",
	I0528 21:12:51.693351   40275 command_runner.go:130] >       "repoTags": [
	I0528 21:12:51.693363   40275 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.1"
	I0528 21:12:51.693378   40275 command_runner.go:130] >       ],
	I0528 21:12:51.693388   40275 command_runner.go:130] >       "repoDigests": [
	I0528 21:12:51.693401   40275 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea",
	I0528 21:12:51.693416   40275 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"
	I0528 21:12:51.693426   40275 command_runner.go:130] >       ],
	I0528 21:12:51.693437   40275 command_runner.go:130] >       "size": "117601759",
	I0528 21:12:51.693444   40275 command_runner.go:130] >       "uid": {
	I0528 21:12:51.693450   40275 command_runner.go:130] >         "value": "0"
	I0528 21:12:51.693459   40275 command_runner.go:130] >       },
	I0528 21:12:51.693466   40275 command_runner.go:130] >       "username": "",
	I0528 21:12:51.693476   40275 command_runner.go:130] >       "spec": null,
	I0528 21:12:51.693484   40275 command_runner.go:130] >       "pinned": false
	I0528 21:12:51.693492   40275 command_runner.go:130] >     },
	I0528 21:12:51.693499   40275 command_runner.go:130] >     {
	I0528 21:12:51.693512   40275 command_runner.go:130] >       "id": "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c",
	I0528 21:12:51.693520   40275 command_runner.go:130] >       "repoTags": [
	I0528 21:12:51.693530   40275 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.1"
	I0528 21:12:51.693539   40275 command_runner.go:130] >       ],
	I0528 21:12:51.693546   40275 command_runner.go:130] >       "repoDigests": [
	I0528 21:12:51.693600   40275 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52",
	I0528 21:12:51.693617   40275 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"
	I0528 21:12:51.693623   40275 command_runner.go:130] >       ],
	I0528 21:12:51.693632   40275 command_runner.go:130] >       "size": "112170310",
	I0528 21:12:51.693641   40275 command_runner.go:130] >       "uid": {
	I0528 21:12:51.693648   40275 command_runner.go:130] >         "value": "0"
	I0528 21:12:51.693657   40275 command_runner.go:130] >       },
	I0528 21:12:51.693664   40275 command_runner.go:130] >       "username": "",
	I0528 21:12:51.693670   40275 command_runner.go:130] >       "spec": null,
	I0528 21:12:51.693675   40275 command_runner.go:130] >       "pinned": false
	I0528 21:12:51.693679   40275 command_runner.go:130] >     },
	I0528 21:12:51.693684   40275 command_runner.go:130] >     {
	I0528 21:12:51.693695   40275 command_runner.go:130] >       "id": "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd",
	I0528 21:12:51.693702   40275 command_runner.go:130] >       "repoTags": [
	I0528 21:12:51.693711   40275 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.1"
	I0528 21:12:51.693717   40275 command_runner.go:130] >       ],
	I0528 21:12:51.693727   40275 command_runner.go:130] >       "repoDigests": [
	I0528 21:12:51.693750   40275 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa",
	I0528 21:12:51.693780   40275 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"
	I0528 21:12:51.693790   40275 command_runner.go:130] >       ],
	I0528 21:12:51.693798   40275 command_runner.go:130] >       "size": "85933465",
	I0528 21:12:51.693825   40275 command_runner.go:130] >       "uid": null,
	I0528 21:12:51.693836   40275 command_runner.go:130] >       "username": "",
	I0528 21:12:51.693844   40275 command_runner.go:130] >       "spec": null,
	I0528 21:12:51.693853   40275 command_runner.go:130] >       "pinned": false
	I0528 21:12:51.693859   40275 command_runner.go:130] >     },
	I0528 21:12:51.693868   40275 command_runner.go:130] >     {
	I0528 21:12:51.693882   40275 command_runner.go:130] >       "id": "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035",
	I0528 21:12:51.693892   40275 command_runner.go:130] >       "repoTags": [
	I0528 21:12:51.693902   40275 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.1"
	I0528 21:12:51.693911   40275 command_runner.go:130] >       ],
	I0528 21:12:51.693919   40275 command_runner.go:130] >       "repoDigests": [
	I0528 21:12:51.693935   40275 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036",
	I0528 21:12:51.693951   40275 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"
	I0528 21:12:51.693959   40275 command_runner.go:130] >       ],
	I0528 21:12:51.693967   40275 command_runner.go:130] >       "size": "63026504",
	I0528 21:12:51.693976   40275 command_runner.go:130] >       "uid": {
	I0528 21:12:51.693984   40275 command_runner.go:130] >         "value": "0"
	I0528 21:12:51.693992   40275 command_runner.go:130] >       },
	I0528 21:12:51.693999   40275 command_runner.go:130] >       "username": "",
	I0528 21:12:51.694008   40275 command_runner.go:130] >       "spec": null,
	I0528 21:12:51.694015   40275 command_runner.go:130] >       "pinned": false
	I0528 21:12:51.694024   40275 command_runner.go:130] >     },
	I0528 21:12:51.694032   40275 command_runner.go:130] >     {
	I0528 21:12:51.694044   40275 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0528 21:12:51.694053   40275 command_runner.go:130] >       "repoTags": [
	I0528 21:12:51.694061   40275 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0528 21:12:51.694070   40275 command_runner.go:130] >       ],
	I0528 21:12:51.694076   40275 command_runner.go:130] >       "repoDigests": [
	I0528 21:12:51.694089   40275 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0528 21:12:51.694104   40275 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0528 21:12:51.694113   40275 command_runner.go:130] >       ],
	I0528 21:12:51.694123   40275 command_runner.go:130] >       "size": "750414",
	I0528 21:12:51.694139   40275 command_runner.go:130] >       "uid": {
	I0528 21:12:51.694150   40275 command_runner.go:130] >         "value": "65535"
	I0528 21:12:51.694159   40275 command_runner.go:130] >       },
	I0528 21:12:51.694166   40275 command_runner.go:130] >       "username": "",
	I0528 21:12:51.694175   40275 command_runner.go:130] >       "spec": null,
	I0528 21:12:51.694181   40275 command_runner.go:130] >       "pinned": true
	I0528 21:12:51.694187   40275 command_runner.go:130] >     }
	I0528 21:12:51.694193   40275 command_runner.go:130] >   ]
	I0528 21:12:51.694198   40275 command_runner.go:130] > }
	I0528 21:12:51.694388   40275 crio.go:514] all images are preloaded for cri-o runtime.
	I0528 21:12:51.694400   40275 crio.go:433] Images already preloaded, skipping extraction
	I0528 21:12:51.694453   40275 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 21:12:51.729165   40275 command_runner.go:130] > {
	I0528 21:12:51.729188   40275 command_runner.go:130] >   "images": [
	I0528 21:12:51.729195   40275 command_runner.go:130] >     {
	I0528 21:12:51.729205   40275 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0528 21:12:51.729211   40275 command_runner.go:130] >       "repoTags": [
	I0528 21:12:51.729219   40275 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0528 21:12:51.729231   40275 command_runner.go:130] >       ],
	I0528 21:12:51.729237   40275 command_runner.go:130] >       "repoDigests": [
	I0528 21:12:51.729251   40275 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0528 21:12:51.729265   40275 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0528 21:12:51.729271   40275 command_runner.go:130] >       ],
	I0528 21:12:51.729282   40275 command_runner.go:130] >       "size": "65291810",
	I0528 21:12:51.729292   40275 command_runner.go:130] >       "uid": null,
	I0528 21:12:51.729299   40275 command_runner.go:130] >       "username": "",
	I0528 21:12:51.729308   40275 command_runner.go:130] >       "spec": null,
	I0528 21:12:51.729316   40275 command_runner.go:130] >       "pinned": false
	I0528 21:12:51.729325   40275 command_runner.go:130] >     },
	I0528 21:12:51.729331   40275 command_runner.go:130] >     {
	I0528 21:12:51.729345   40275 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0528 21:12:51.729352   40275 command_runner.go:130] >       "repoTags": [
	I0528 21:12:51.729361   40275 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0528 21:12:51.729368   40275 command_runner.go:130] >       ],
	I0528 21:12:51.729376   40275 command_runner.go:130] >       "repoDigests": [
	I0528 21:12:51.729389   40275 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0528 21:12:51.729404   40275 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0528 21:12:51.729413   40275 command_runner.go:130] >       ],
	I0528 21:12:51.729424   40275 command_runner.go:130] >       "size": "65908273",
	I0528 21:12:51.729432   40275 command_runner.go:130] >       "uid": null,
	I0528 21:12:51.729445   40275 command_runner.go:130] >       "username": "",
	I0528 21:12:51.729454   40275 command_runner.go:130] >       "spec": null,
	I0528 21:12:51.729461   40275 command_runner.go:130] >       "pinned": false
	I0528 21:12:51.729470   40275 command_runner.go:130] >     },
	I0528 21:12:51.729476   40275 command_runner.go:130] >     {
	I0528 21:12:51.729489   40275 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0528 21:12:51.729499   40275 command_runner.go:130] >       "repoTags": [
	I0528 21:12:51.729509   40275 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0528 21:12:51.729519   40275 command_runner.go:130] >       ],
	I0528 21:12:51.729528   40275 command_runner.go:130] >       "repoDigests": [
	I0528 21:12:51.729544   40275 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0528 21:12:51.729559   40275 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0528 21:12:51.729568   40275 command_runner.go:130] >       ],
	I0528 21:12:51.729575   40275 command_runner.go:130] >       "size": "1363676",
	I0528 21:12:51.729584   40275 command_runner.go:130] >       "uid": null,
	I0528 21:12:51.729590   40275 command_runner.go:130] >       "username": "",
	I0528 21:12:51.729598   40275 command_runner.go:130] >       "spec": null,
	I0528 21:12:51.729608   40275 command_runner.go:130] >       "pinned": false
	I0528 21:12:51.729615   40275 command_runner.go:130] >     },
	I0528 21:12:51.729626   40275 command_runner.go:130] >     {
	I0528 21:12:51.729637   40275 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0528 21:12:51.729646   40275 command_runner.go:130] >       "repoTags": [
	I0528 21:12:51.729655   40275 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0528 21:12:51.729664   40275 command_runner.go:130] >       ],
	I0528 21:12:51.729671   40275 command_runner.go:130] >       "repoDigests": [
	I0528 21:12:51.729687   40275 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0528 21:12:51.729707   40275 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0528 21:12:51.729716   40275 command_runner.go:130] >       ],
	I0528 21:12:51.729724   40275 command_runner.go:130] >       "size": "31470524",
	I0528 21:12:51.729733   40275 command_runner.go:130] >       "uid": null,
	I0528 21:12:51.729740   40275 command_runner.go:130] >       "username": "",
	I0528 21:12:51.729749   40275 command_runner.go:130] >       "spec": null,
	I0528 21:12:51.729756   40275 command_runner.go:130] >       "pinned": false
	I0528 21:12:51.729780   40275 command_runner.go:130] >     },
	I0528 21:12:51.729787   40275 command_runner.go:130] >     {
	I0528 21:12:51.729801   40275 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0528 21:12:51.729811   40275 command_runner.go:130] >       "repoTags": [
	I0528 21:12:51.729821   40275 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0528 21:12:51.729830   40275 command_runner.go:130] >       ],
	I0528 21:12:51.729838   40275 command_runner.go:130] >       "repoDigests": [
	I0528 21:12:51.729854   40275 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0528 21:12:51.729869   40275 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0528 21:12:51.729879   40275 command_runner.go:130] >       ],
	I0528 21:12:51.729886   40275 command_runner.go:130] >       "size": "61245718",
	I0528 21:12:51.729896   40275 command_runner.go:130] >       "uid": null,
	I0528 21:12:51.729907   40275 command_runner.go:130] >       "username": "nonroot",
	I0528 21:12:51.729914   40275 command_runner.go:130] >       "spec": null,
	I0528 21:12:51.729922   40275 command_runner.go:130] >       "pinned": false
	I0528 21:12:51.729928   40275 command_runner.go:130] >     },
	I0528 21:12:51.729937   40275 command_runner.go:130] >     {
	I0528 21:12:51.729947   40275 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0528 21:12:51.729957   40275 command_runner.go:130] >       "repoTags": [
	I0528 21:12:51.729966   40275 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0528 21:12:51.729974   40275 command_runner.go:130] >       ],
	I0528 21:12:51.729981   40275 command_runner.go:130] >       "repoDigests": [
	I0528 21:12:51.729997   40275 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0528 21:12:51.730012   40275 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0528 21:12:51.730021   40275 command_runner.go:130] >       ],
	I0528 21:12:51.730029   40275 command_runner.go:130] >       "size": "150779692",
	I0528 21:12:51.730038   40275 command_runner.go:130] >       "uid": {
	I0528 21:12:51.730046   40275 command_runner.go:130] >         "value": "0"
	I0528 21:12:51.730055   40275 command_runner.go:130] >       },
	I0528 21:12:51.730062   40275 command_runner.go:130] >       "username": "",
	I0528 21:12:51.730074   40275 command_runner.go:130] >       "spec": null,
	I0528 21:12:51.730084   40275 command_runner.go:130] >       "pinned": false
	I0528 21:12:51.730094   40275 command_runner.go:130] >     },
	I0528 21:12:51.730100   40275 command_runner.go:130] >     {
	I0528 21:12:51.730111   40275 command_runner.go:130] >       "id": "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a",
	I0528 21:12:51.730120   40275 command_runner.go:130] >       "repoTags": [
	I0528 21:12:51.730129   40275 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.1"
	I0528 21:12:51.730138   40275 command_runner.go:130] >       ],
	I0528 21:12:51.730146   40275 command_runner.go:130] >       "repoDigests": [
	I0528 21:12:51.730162   40275 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea",
	I0528 21:12:51.730178   40275 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"
	I0528 21:12:51.730187   40275 command_runner.go:130] >       ],
	I0528 21:12:51.730196   40275 command_runner.go:130] >       "size": "117601759",
	I0528 21:12:51.730206   40275 command_runner.go:130] >       "uid": {
	I0528 21:12:51.730213   40275 command_runner.go:130] >         "value": "0"
	I0528 21:12:51.730222   40275 command_runner.go:130] >       },
	I0528 21:12:51.730240   40275 command_runner.go:130] >       "username": "",
	I0528 21:12:51.730250   40275 command_runner.go:130] >       "spec": null,
	I0528 21:12:51.730257   40275 command_runner.go:130] >       "pinned": false
	I0528 21:12:51.730267   40275 command_runner.go:130] >     },
	I0528 21:12:51.730275   40275 command_runner.go:130] >     {
	I0528 21:12:51.730289   40275 command_runner.go:130] >       "id": "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c",
	I0528 21:12:51.730297   40275 command_runner.go:130] >       "repoTags": [
	I0528 21:12:51.730309   40275 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.1"
	I0528 21:12:51.730318   40275 command_runner.go:130] >       ],
	I0528 21:12:51.730326   40275 command_runner.go:130] >       "repoDigests": [
	I0528 21:12:51.730348   40275 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52",
	I0528 21:12:51.730363   40275 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"
	I0528 21:12:51.730373   40275 command_runner.go:130] >       ],
	I0528 21:12:51.730380   40275 command_runner.go:130] >       "size": "112170310",
	I0528 21:12:51.730389   40275 command_runner.go:130] >       "uid": {
	I0528 21:12:51.730397   40275 command_runner.go:130] >         "value": "0"
	I0528 21:12:51.730405   40275 command_runner.go:130] >       },
	I0528 21:12:51.730413   40275 command_runner.go:130] >       "username": "",
	I0528 21:12:51.730423   40275 command_runner.go:130] >       "spec": null,
	I0528 21:12:51.730432   40275 command_runner.go:130] >       "pinned": false
	I0528 21:12:51.730438   40275 command_runner.go:130] >     },
	I0528 21:12:51.730445   40275 command_runner.go:130] >     {
	I0528 21:12:51.730458   40275 command_runner.go:130] >       "id": "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd",
	I0528 21:12:51.730466   40275 command_runner.go:130] >       "repoTags": [
	I0528 21:12:51.730478   40275 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.1"
	I0528 21:12:51.730486   40275 command_runner.go:130] >       ],
	I0528 21:12:51.730494   40275 command_runner.go:130] >       "repoDigests": [
	I0528 21:12:51.730509   40275 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa",
	I0528 21:12:51.730525   40275 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"
	I0528 21:12:51.730534   40275 command_runner.go:130] >       ],
	I0528 21:12:51.730541   40275 command_runner.go:130] >       "size": "85933465",
	I0528 21:12:51.730548   40275 command_runner.go:130] >       "uid": null,
	I0528 21:12:51.730559   40275 command_runner.go:130] >       "username": "",
	I0528 21:12:51.730568   40275 command_runner.go:130] >       "spec": null,
	I0528 21:12:51.730578   40275 command_runner.go:130] >       "pinned": false
	I0528 21:12:51.730587   40275 command_runner.go:130] >     },
	I0528 21:12:51.730593   40275 command_runner.go:130] >     {
	I0528 21:12:51.730607   40275 command_runner.go:130] >       "id": "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035",
	I0528 21:12:51.730616   40275 command_runner.go:130] >       "repoTags": [
	I0528 21:12:51.730628   40275 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.1"
	I0528 21:12:51.730637   40275 command_runner.go:130] >       ],
	I0528 21:12:51.730645   40275 command_runner.go:130] >       "repoDigests": [
	I0528 21:12:51.730661   40275 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036",
	I0528 21:12:51.730676   40275 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"
	I0528 21:12:51.730685   40275 command_runner.go:130] >       ],
	I0528 21:12:51.730693   40275 command_runner.go:130] >       "size": "63026504",
	I0528 21:12:51.730703   40275 command_runner.go:130] >       "uid": {
	I0528 21:12:51.730711   40275 command_runner.go:130] >         "value": "0"
	I0528 21:12:51.730722   40275 command_runner.go:130] >       },
	I0528 21:12:51.730732   40275 command_runner.go:130] >       "username": "",
	I0528 21:12:51.730739   40275 command_runner.go:130] >       "spec": null,
	I0528 21:12:51.730749   40275 command_runner.go:130] >       "pinned": false
	I0528 21:12:51.730757   40275 command_runner.go:130] >     },
	I0528 21:12:51.730765   40275 command_runner.go:130] >     {
	I0528 21:12:51.730777   40275 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0528 21:12:51.730786   40275 command_runner.go:130] >       "repoTags": [
	I0528 21:12:51.730795   40275 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0528 21:12:51.730804   40275 command_runner.go:130] >       ],
	I0528 21:12:51.730811   40275 command_runner.go:130] >       "repoDigests": [
	I0528 21:12:51.730827   40275 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0528 21:12:51.730842   40275 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0528 21:12:51.730851   40275 command_runner.go:130] >       ],
	I0528 21:12:51.730858   40275 command_runner.go:130] >       "size": "750414",
	I0528 21:12:51.730869   40275 command_runner.go:130] >       "uid": {
	I0528 21:12:51.730879   40275 command_runner.go:130] >         "value": "65535"
	I0528 21:12:51.730885   40275 command_runner.go:130] >       },
	I0528 21:12:51.730895   40275 command_runner.go:130] >       "username": "",
	I0528 21:12:51.730904   40275 command_runner.go:130] >       "spec": null,
	I0528 21:12:51.730911   40275 command_runner.go:130] >       "pinned": true
	I0528 21:12:51.730920   40275 command_runner.go:130] >     }
	I0528 21:12:51.730926   40275 command_runner.go:130] >   ]
	I0528 21:12:51.730934   40275 command_runner.go:130] > }
	I0528 21:12:51.731055   40275 crio.go:514] all images are preloaded for cri-o runtime.
	I0528 21:12:51.731068   40275 cache_images.go:84] Images are preloaded, skipping loading
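The preload check above runs `sudo crictl images --output json` and inspects the result. As a reference, here is a minimal Go sketch that decodes JSON with the same shape as the listing dumped above and reports whether a few of the expected repo tags are present. It is not minikube's preload logic; the struct fields and the tags checked are taken from the log output, the rest is illustrative.

    // crictlimages.go: decode `crictl images --output json` and check repo tags.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    	"os/exec"
    )

    type imageList struct {
    	Images []struct {
    		ID          string   `json:"id"`
    		RepoTags    []string `json:"repoTags"`
    		RepoDigests []string `json:"repoDigests"`
    		Size        string   `json:"size"`
    		Pinned      bool     `json:"pinned"`
    	} `json:"images"`
    }

    func main() {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "crictl failed:", err)
    		os.Exit(1)
    	}
    	var list imageList
    	if err := json.Unmarshal(out, &list); err != nil {
    		fmt.Fprintln(os.Stderr, "bad JSON:", err)
    		os.Exit(1)
    	}
    	have := map[string]bool{}
    	for _, img := range list.Images {
    		for _, tag := range img.RepoTags {
    			have[tag] = true
    		}
    	}
    	// Tags taken from the image listing in the log above.
    	for _, want := range []string{
    		"registry.k8s.io/kube-apiserver:v1.30.1",
    		"registry.k8s.io/etcd:3.5.12-0",
    		"registry.k8s.io/coredns/coredns:v1.11.1",
    	} {
    		fmt.Printf("%-45s present=%v\n", want, have[want])
    	}
    }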
	I0528 21:12:51.731077   40275 kubeadm.go:928] updating node { 192.168.39.65 8443 v1.30.1 crio true true} ...
	I0528 21:12:51.731185   40275 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-869191 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-869191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
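The kubelet systemd drop-in logged above is generated from the node's config (hostname override and node IP). A minimal Go sketch of rendering such a unit with text/template follows; it is not how minikube builds the unit, and the parameter names are illustrative, with the concrete values copied from the log.

    // kubeletunit.go: render a kubelet systemd drop-in like the one logged above.
    package main

    import (
    	"os"
    	"text/template"
    )

    const unitTmpl = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
    	// Values taken from the log above.
    	params := struct {
    		KubernetesVersion, NodeName, NodeIP string
    	}{"v1.30.1", "multinode-869191", "192.168.39.65"}
    	t := template.Must(template.New("kubelet").Parse(unitTmpl))
    	if err := t.Execute(os.Stdout, params); err != nil {
    		panic(err)
    	}
    }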
	I0528 21:12:51.731269   40275 ssh_runner.go:195] Run: crio config
	I0528 21:12:51.780871   40275 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0528 21:12:51.780901   40275 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0528 21:12:51.780912   40275 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0528 21:12:51.780917   40275 command_runner.go:130] > #
	I0528 21:12:51.780929   40275 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0528 21:12:51.780936   40275 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0528 21:12:51.780944   40275 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0528 21:12:51.780955   40275 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0528 21:12:51.780965   40275 command_runner.go:130] > # reload'.
	I0528 21:12:51.780985   40275 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0528 21:12:51.780998   40275 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0528 21:12:51.781015   40275 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0528 21:12:51.781022   40275 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0528 21:12:51.781028   40275 command_runner.go:130] > [crio]
	I0528 21:12:51.781036   40275 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0528 21:12:51.781045   40275 command_runner.go:130] > # containers images, in this directory.
	I0528 21:12:51.781054   40275 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0528 21:12:51.781077   40275 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0528 21:12:51.781141   40275 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0528 21:12:51.781167   40275 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0528 21:12:51.781380   40275 command_runner.go:130] > # imagestore = ""
	I0528 21:12:51.781394   40275 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0528 21:12:51.781401   40275 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0528 21:12:51.781562   40275 command_runner.go:130] > storage_driver = "overlay"
	I0528 21:12:51.781579   40275 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0528 21:12:51.781588   40275 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0528 21:12:51.781597   40275 command_runner.go:130] > storage_option = [
	I0528 21:12:51.781771   40275 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0528 21:12:51.781812   40275 command_runner.go:130] > ]
	I0528 21:12:51.781831   40275 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0528 21:12:51.781845   40275 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0528 21:12:51.782191   40275 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0528 21:12:51.782213   40275 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0528 21:12:51.782222   40275 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0528 21:12:51.782230   40275 command_runner.go:130] > # always happen on a node reboot
	I0528 21:12:51.782486   40275 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0528 21:12:51.782511   40275 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0528 21:12:51.782523   40275 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0528 21:12:51.782535   40275 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0528 21:12:51.782607   40275 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0528 21:12:51.782630   40275 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0528 21:12:51.782645   40275 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0528 21:12:51.782883   40275 command_runner.go:130] > # internal_wipe = true
	I0528 21:12:51.782897   40275 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0528 21:12:51.782903   40275 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0528 21:12:51.783237   40275 command_runner.go:130] > # internal_repair = false
	I0528 21:12:51.783249   40275 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0528 21:12:51.783257   40275 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0528 21:12:51.783266   40275 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0528 21:12:51.783505   40275 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0528 21:12:51.783515   40275 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0528 21:12:51.783519   40275 command_runner.go:130] > [crio.api]
	I0528 21:12:51.783524   40275 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0528 21:12:51.783740   40275 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0528 21:12:51.783754   40275 command_runner.go:130] > # IP address on which the stream server will listen.
	I0528 21:12:51.783990   40275 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0528 21:12:51.784016   40275 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0528 21:12:51.784025   40275 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0528 21:12:51.784238   40275 command_runner.go:130] > # stream_port = "0"
	I0528 21:12:51.784253   40275 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0528 21:12:51.784676   40275 command_runner.go:130] > # stream_enable_tls = false
	I0528 21:12:51.784694   40275 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0528 21:12:51.784846   40275 command_runner.go:130] > # stream_idle_timeout = ""
	I0528 21:12:51.784860   40275 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0528 21:12:51.784870   40275 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0528 21:12:51.784876   40275 command_runner.go:130] > # minutes.
	I0528 21:12:51.785145   40275 command_runner.go:130] > # stream_tls_cert = ""
	I0528 21:12:51.785161   40275 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0528 21:12:51.785170   40275 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0528 21:12:51.785342   40275 command_runner.go:130] > # stream_tls_key = ""
	I0528 21:12:51.785355   40275 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0528 21:12:51.785367   40275 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0528 21:12:51.785417   40275 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0528 21:12:51.785596   40275 command_runner.go:130] > # stream_tls_ca = ""
	I0528 21:12:51.785607   40275 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0528 21:12:51.785640   40275 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0528 21:12:51.785657   40275 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0528 21:12:51.785782   40275 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0528 21:12:51.785798   40275 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0528 21:12:51.785807   40275 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0528 21:12:51.785817   40275 command_runner.go:130] > [crio.runtime]
	I0528 21:12:51.785825   40275 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0528 21:12:51.785834   40275 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0528 21:12:51.785840   40275 command_runner.go:130] > # "nofile=1024:2048"
	I0528 21:12:51.785849   40275 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0528 21:12:51.785952   40275 command_runner.go:130] > # default_ulimits = [
	I0528 21:12:51.786267   40275 command_runner.go:130] > # ]
	I0528 21:12:51.786284   40275 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0528 21:12:51.786322   40275 command_runner.go:130] > # no_pivot = false
	I0528 21:12:51.786337   40275 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0528 21:12:51.786348   40275 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0528 21:12:51.786444   40275 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0528 21:12:51.786457   40275 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0528 21:12:51.786465   40275 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0528 21:12:51.786483   40275 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0528 21:12:51.786614   40275 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0528 21:12:51.786630   40275 command_runner.go:130] > # Cgroup setting for conmon
	I0528 21:12:51.786641   40275 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0528 21:12:51.786800   40275 command_runner.go:130] > conmon_cgroup = "pod"
	I0528 21:12:51.786816   40275 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0528 21:12:51.786825   40275 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0528 21:12:51.786835   40275 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0528 21:12:51.786844   40275 command_runner.go:130] > conmon_env = [
	I0528 21:12:51.786969   40275 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0528 21:12:51.786998   40275 command_runner.go:130] > ]
	I0528 21:12:51.787011   40275 command_runner.go:130] > # Additional environment variables to set for all the
	I0528 21:12:51.787022   40275 command_runner.go:130] > # containers. These are overridden if set in the
	I0528 21:12:51.787034   40275 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0528 21:12:51.787281   40275 command_runner.go:130] > # default_env = [
	I0528 21:12:51.787413   40275 command_runner.go:130] > # ]
	I0528 21:12:51.787427   40275 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0528 21:12:51.787440   40275 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0528 21:12:51.787703   40275 command_runner.go:130] > # selinux = false
	I0528 21:12:51.787721   40275 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0528 21:12:51.787732   40275 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0528 21:12:51.787741   40275 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0528 21:12:51.788027   40275 command_runner.go:130] > # seccomp_profile = ""
	I0528 21:12:51.788042   40275 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0528 21:12:51.788052   40275 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0528 21:12:51.788062   40275 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0528 21:12:51.788070   40275 command_runner.go:130] > # which might increase security.
	I0528 21:12:51.788081   40275 command_runner.go:130] > # This option is currently deprecated,
	I0528 21:12:51.788095   40275 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0528 21:12:51.788135   40275 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0528 21:12:51.788154   40275 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0528 21:12:51.788165   40275 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0528 21:12:51.788178   40275 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0528 21:12:51.788187   40275 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0528 21:12:51.788199   40275 command_runner.go:130] > # This option supports live configuration reload.
	I0528 21:12:51.788517   40275 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0528 21:12:51.788535   40275 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0528 21:12:51.788543   40275 command_runner.go:130] > # the cgroup blockio controller.
	I0528 21:12:51.789974   40275 command_runner.go:130] > # blockio_config_file = ""
	I0528 21:12:51.789994   40275 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0528 21:12:51.790000   40275 command_runner.go:130] > # blockio parameters.
	I0528 21:12:51.790006   40275 command_runner.go:130] > # blockio_reload = false
	I0528 21:12:51.790018   40275 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0528 21:12:51.790026   40275 command_runner.go:130] > # irqbalance daemon.
	I0528 21:12:51.790035   40275 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0528 21:12:51.790045   40275 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0528 21:12:51.790062   40275 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0528 21:12:51.790074   40275 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0528 21:12:51.790084   40275 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0528 21:12:51.790095   40275 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0528 21:12:51.790106   40275 command_runner.go:130] > # This option supports live configuration reload.
	I0528 21:12:51.790113   40275 command_runner.go:130] > # rdt_config_file = ""
	I0528 21:12:51.790125   40275 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0528 21:12:51.790131   40275 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0528 21:12:51.790165   40275 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0528 21:12:51.790172   40275 command_runner.go:130] > # separate_pull_cgroup = ""
	I0528 21:12:51.790181   40275 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0528 21:12:51.790191   40275 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0528 21:12:51.790196   40275 command_runner.go:130] > # will be added.
	I0528 21:12:51.790202   40275 command_runner.go:130] > # default_capabilities = [
	I0528 21:12:51.790207   40275 command_runner.go:130] > # 	"CHOWN",
	I0528 21:12:51.790215   40275 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0528 21:12:51.790220   40275 command_runner.go:130] > # 	"FSETID",
	I0528 21:12:51.790226   40275 command_runner.go:130] > # 	"FOWNER",
	I0528 21:12:51.790239   40275 command_runner.go:130] > # 	"SETGID",
	I0528 21:12:51.790245   40275 command_runner.go:130] > # 	"SETUID",
	I0528 21:12:51.790250   40275 command_runner.go:130] > # 	"SETPCAP",
	I0528 21:12:51.790258   40275 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0528 21:12:51.790264   40275 command_runner.go:130] > # 	"KILL",
	I0528 21:12:51.790270   40275 command_runner.go:130] > # ]
	I0528 21:12:51.790282   40275 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0528 21:12:51.790295   40275 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0528 21:12:51.790305   40275 command_runner.go:130] > # add_inheritable_capabilities = false
	I0528 21:12:51.790315   40275 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0528 21:12:51.790321   40275 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0528 21:12:51.790325   40275 command_runner.go:130] > default_sysctls = [
	I0528 21:12:51.790330   40275 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0528 21:12:51.790336   40275 command_runner.go:130] > ]
	I0528 21:12:51.790341   40275 command_runner.go:130] > # List of devices on the host that a
	I0528 21:12:51.790347   40275 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0528 21:12:51.790353   40275 command_runner.go:130] > # allowed_devices = [
	I0528 21:12:51.790357   40275 command_runner.go:130] > # 	"/dev/fuse",
	I0528 21:12:51.790360   40275 command_runner.go:130] > # ]
	I0528 21:12:51.790365   40275 command_runner.go:130] > # List of additional devices, specified as
	I0528 21:12:51.790373   40275 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0528 21:12:51.790382   40275 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0528 21:12:51.790388   40275 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0528 21:12:51.790394   40275 command_runner.go:130] > # additional_devices = [
	I0528 21:12:51.790397   40275 command_runner.go:130] > # ]
	I0528 21:12:51.790402   40275 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0528 21:12:51.790408   40275 command_runner.go:130] > # cdi_spec_dirs = [
	I0528 21:12:51.790413   40275 command_runner.go:130] > # 	"/etc/cdi",
	I0528 21:12:51.790417   40275 command_runner.go:130] > # 	"/var/run/cdi",
	I0528 21:12:51.790422   40275 command_runner.go:130] > # ]
	I0528 21:12:51.790429   40275 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0528 21:12:51.790437   40275 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0528 21:12:51.790441   40275 command_runner.go:130] > # Defaults to false.
	I0528 21:12:51.790445   40275 command_runner.go:130] > # device_ownership_from_security_context = false
	I0528 21:12:51.790454   40275 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0528 21:12:51.790459   40275 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0528 21:12:51.790465   40275 command_runner.go:130] > # hooks_dir = [
	I0528 21:12:51.790470   40275 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0528 21:12:51.790475   40275 command_runner.go:130] > # ]
	I0528 21:12:51.790481   40275 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0528 21:12:51.790489   40275 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0528 21:12:51.790494   40275 command_runner.go:130] > # its default mounts from the following two files:
	I0528 21:12:51.790499   40275 command_runner.go:130] > #
	I0528 21:12:51.790504   40275 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0528 21:12:51.790512   40275 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0528 21:12:51.790517   40275 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0528 21:12:51.790523   40275 command_runner.go:130] > #
	I0528 21:12:51.790528   40275 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0528 21:12:51.790534   40275 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0528 21:12:51.790559   40275 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0528 21:12:51.790570   40275 command_runner.go:130] > #      only add mounts it finds in this file.
	I0528 21:12:51.790573   40275 command_runner.go:130] > #
	I0528 21:12:51.790577   40275 command_runner.go:130] > # default_mounts_file = ""
	I0528 21:12:51.790582   40275 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0528 21:12:51.790592   40275 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0528 21:12:51.790598   40275 command_runner.go:130] > pids_limit = 1024
	I0528 21:12:51.790605   40275 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0528 21:12:51.790613   40275 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0528 21:12:51.790620   40275 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0528 21:12:51.790636   40275 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0528 21:12:51.790642   40275 command_runner.go:130] > # log_size_max = -1
	I0528 21:12:51.790649   40275 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0528 21:12:51.790653   40275 command_runner.go:130] > # log_to_journald = false
	I0528 21:12:51.790659   40275 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0528 21:12:51.790663   40275 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0528 21:12:51.790668   40275 command_runner.go:130] > # Path to directory for container attach sockets.
	I0528 21:12:51.790673   40275 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0528 21:12:51.790678   40275 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0528 21:12:51.790683   40275 command_runner.go:130] > # bind_mount_prefix = ""
	I0528 21:12:51.790688   40275 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0528 21:12:51.790693   40275 command_runner.go:130] > # read_only = false
	I0528 21:12:51.790698   40275 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0528 21:12:51.790704   40275 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0528 21:12:51.790710   40275 command_runner.go:130] > # live configuration reload.
	I0528 21:12:51.790714   40275 command_runner.go:130] > # log_level = "info"
	I0528 21:12:51.790720   40275 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0528 21:12:51.790726   40275 command_runner.go:130] > # This option supports live configuration reload.
	I0528 21:12:51.790729   40275 command_runner.go:130] > # log_filter = ""
	I0528 21:12:51.790735   40275 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0528 21:12:51.790741   40275 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0528 21:12:51.790746   40275 command_runner.go:130] > # separated by comma.
	I0528 21:12:51.790753   40275 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0528 21:12:51.790760   40275 command_runner.go:130] > # uid_mappings = ""
	I0528 21:12:51.790765   40275 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0528 21:12:51.790771   40275 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0528 21:12:51.790775   40275 command_runner.go:130] > # separated by comma.
	I0528 21:12:51.790787   40275 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0528 21:12:51.790794   40275 command_runner.go:130] > # gid_mappings = ""
	I0528 21:12:51.790800   40275 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0528 21:12:51.790809   40275 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0528 21:12:51.790815   40275 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0528 21:12:51.790825   40275 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0528 21:12:51.790829   40275 command_runner.go:130] > # minimum_mappable_uid = -1
	I0528 21:12:51.790835   40275 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0528 21:12:51.790842   40275 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0528 21:12:51.790848   40275 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0528 21:12:51.790857   40275 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0528 21:12:51.790861   40275 command_runner.go:130] > # minimum_mappable_gid = -1
	I0528 21:12:51.790866   40275 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0528 21:12:51.790874   40275 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0528 21:12:51.790879   40275 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0528 21:12:51.790885   40275 command_runner.go:130] > # ctr_stop_timeout = 30
	I0528 21:12:51.790891   40275 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0528 21:12:51.790896   40275 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0528 21:12:51.790900   40275 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0528 21:12:51.790905   40275 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0528 21:12:51.790909   40275 command_runner.go:130] > drop_infra_ctr = false
	I0528 21:12:51.790914   40275 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0528 21:12:51.790919   40275 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0528 21:12:51.790926   40275 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0528 21:12:51.790929   40275 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0528 21:12:51.790935   40275 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0528 21:12:51.790941   40275 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0528 21:12:51.790948   40275 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0528 21:12:51.790953   40275 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0528 21:12:51.790957   40275 command_runner.go:130] > # shared_cpuset = ""
	I0528 21:12:51.790962   40275 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0528 21:12:51.790967   40275 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0528 21:12:51.790973   40275 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0528 21:12:51.790980   40275 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0528 21:12:51.790987   40275 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0528 21:12:51.790993   40275 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0528 21:12:51.791000   40275 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0528 21:12:51.791006   40275 command_runner.go:130] > # enable_criu_support = false
	I0528 21:12:51.791011   40275 command_runner.go:130] > # Enable/disable the generation of the container,
	I0528 21:12:51.791018   40275 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0528 21:12:51.791022   40275 command_runner.go:130] > # enable_pod_events = false
	I0528 21:12:51.791028   40275 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0528 21:12:51.791039   40275 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0528 21:12:51.791045   40275 command_runner.go:130] > # default_runtime = "runc"
	I0528 21:12:51.791051   40275 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0528 21:12:51.791062   40275 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0528 21:12:51.791075   40275 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0528 21:12:51.791087   40275 command_runner.go:130] > # creation as a file is not desired either.
	I0528 21:12:51.791102   40275 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0528 21:12:51.791116   40275 command_runner.go:130] > # the hostname is being managed dynamically.
	I0528 21:12:51.791126   40275 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0528 21:12:51.791132   40275 command_runner.go:130] > # ]
	I0528 21:12:51.791145   40275 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0528 21:12:51.791157   40275 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0528 21:12:51.791170   40275 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0528 21:12:51.791181   40275 command_runner.go:130] > # Each entry in the table should follow the format:
	I0528 21:12:51.791189   40275 command_runner.go:130] > #
	I0528 21:12:51.791197   40275 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0528 21:12:51.791205   40275 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0528 21:12:51.791233   40275 command_runner.go:130] > # runtime_type = "oci"
	I0528 21:12:51.791244   40275 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0528 21:12:51.791251   40275 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0528 21:12:51.791261   40275 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0528 21:12:51.791271   40275 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0528 21:12:51.791277   40275 command_runner.go:130] > # monitor_env = []
	I0528 21:12:51.791286   40275 command_runner.go:130] > # privileged_without_host_devices = false
	I0528 21:12:51.791295   40275 command_runner.go:130] > # allowed_annotations = []
	I0528 21:12:51.791305   40275 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0528 21:12:51.791314   40275 command_runner.go:130] > # Where:
	I0528 21:12:51.791322   40275 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0528 21:12:51.791335   40275 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0528 21:12:51.791347   40275 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0528 21:12:51.791358   40275 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0528 21:12:51.791364   40275 command_runner.go:130] > #   in $PATH.
	I0528 21:12:51.791376   40275 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0528 21:12:51.791387   40275 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0528 21:12:51.791397   40275 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0528 21:12:51.791407   40275 command_runner.go:130] > #   state.
	I0528 21:12:51.791424   40275 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0528 21:12:51.791431   40275 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0528 21:12:51.791437   40275 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0528 21:12:51.791443   40275 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0528 21:12:51.791448   40275 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0528 21:12:51.791454   40275 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0528 21:12:51.791459   40275 command_runner.go:130] > #   The currently recognized values are:
	I0528 21:12:51.791465   40275 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0528 21:12:51.791478   40275 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0528 21:12:51.791484   40275 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0528 21:12:51.791492   40275 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0528 21:12:51.791499   40275 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0528 21:12:51.791505   40275 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0528 21:12:51.791514   40275 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0528 21:12:51.791520   40275 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0528 21:12:51.791529   40275 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0528 21:12:51.791535   40275 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0528 21:12:51.791541   40275 command_runner.go:130] > #   deprecated option "conmon".
	I0528 21:12:51.791548   40275 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0528 21:12:51.791555   40275 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0528 21:12:51.791561   40275 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0528 21:12:51.791569   40275 command_runner.go:130] > #   should be moved to the container's cgroup
	I0528 21:12:51.791575   40275 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0528 21:12:51.791581   40275 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0528 21:12:51.791592   40275 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0528 21:12:51.791604   40275 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0528 21:12:51.791612   40275 command_runner.go:130] > #
	I0528 21:12:51.791623   40275 command_runner.go:130] > # Using the seccomp notifier feature:
	I0528 21:12:51.791630   40275 command_runner.go:130] > #
	I0528 21:12:51.791641   40275 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0528 21:12:51.791654   40275 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0528 21:12:51.791660   40275 command_runner.go:130] > #
	I0528 21:12:51.791670   40275 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0528 21:12:51.791683   40275 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0528 21:12:51.791688   40275 command_runner.go:130] > #
	I0528 21:12:51.791695   40275 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0528 21:12:51.791702   40275 command_runner.go:130] > # feature.
	I0528 21:12:51.791710   40275 command_runner.go:130] > #
	I0528 21:12:51.791718   40275 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0528 21:12:51.791727   40275 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0528 21:12:51.791733   40275 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0528 21:12:51.791742   40275 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0528 21:12:51.791748   40275 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0528 21:12:51.791753   40275 command_runner.go:130] > #
	I0528 21:12:51.791759   40275 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0528 21:12:51.791767   40275 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0528 21:12:51.791771   40275 command_runner.go:130] > #
	I0528 21:12:51.791776   40275 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0528 21:12:51.791785   40275 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0528 21:12:51.791790   40275 command_runner.go:130] > #
	I0528 21:12:51.791796   40275 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0528 21:12:51.791804   40275 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0528 21:12:51.791808   40275 command_runner.go:130] > # limitation.
	I0528 21:12:51.791820   40275 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0528 21:12:51.791827   40275 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0528 21:12:51.791842   40275 command_runner.go:130] > runtime_type = "oci"
	I0528 21:12:51.791851   40275 command_runner.go:130] > runtime_root = "/run/runc"
	I0528 21:12:51.791856   40275 command_runner.go:130] > runtime_config_path = ""
	I0528 21:12:51.791861   40275 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0528 21:12:51.791867   40275 command_runner.go:130] > monitor_cgroup = "pod"
	I0528 21:12:51.791871   40275 command_runner.go:130] > monitor_exec_cgroup = ""
	I0528 21:12:51.791875   40275 command_runner.go:130] > monitor_env = [
	I0528 21:12:51.791881   40275 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0528 21:12:51.791887   40275 command_runner.go:130] > ]
	I0528 21:12:51.791892   40275 command_runner.go:130] > privileged_without_host_devices = false
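The runtimes-table format documented above (including the seccomp-notifier annotation it mentions) can be exercised by declaring an extra handler next to the runc entry just shown. A minimal sketch only, not taken from this run: the handler name "crun", its paths, and the drop-in filename are assumptions, and it presumes CRI-O's default drop-in directory /etc/crio/crio.conf.d is in use.

	# On the node (e.g. via: out/minikube-linux-amd64 -p multinode-869191 ssh)
	sudo tee /etc/crio/crio.conf.d/10-crun.conf >/dev/null <<-'EOF'
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	allowed_annotations = ["io.kubernetes.cri-o.seccompNotifierAction"]
	EOF
	sudo systemctl restart crio

A pod would then select the new handler through a Kubernetes RuntimeClass whose handler field matches the table name ("crun" here).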
	I0528 21:12:51.791902   40275 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0528 21:12:51.791909   40275 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0528 21:12:51.791915   40275 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0528 21:12:51.791923   40275 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0528 21:12:51.791933   40275 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0528 21:12:51.791939   40275 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0528 21:12:51.791950   40275 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0528 21:12:51.791960   40275 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0528 21:12:51.791966   40275 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0528 21:12:51.791975   40275 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0528 21:12:51.791979   40275 command_runner.go:130] > # Example:
	I0528 21:12:51.791985   40275 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0528 21:12:51.791989   40275 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0528 21:12:51.791997   40275 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0528 21:12:51.792002   40275 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0528 21:12:51.792005   40275 command_runner.go:130] > # cpuset = "0-1"
	I0528 21:12:51.792009   40275 command_runner.go:130] > # cpushares = 0
	I0528 21:12:51.792012   40275 command_runner.go:130] > # Where:
	I0528 21:12:51.792016   40275 command_runner.go:130] > # The workload name is workload-type.
	I0528 21:12:51.792023   40275 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0528 21:12:51.792028   40275 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0528 21:12:51.792034   40275 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0528 21:12:51.792041   40275 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0528 21:12:51.792046   40275 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
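To make the workload annotations above concrete, here is a hypothetical pod that opts into the example workload. This is a sketch reusing the names from the comments ("io.crio/workload", "io.crio.workload-type"); it only has an effect if such a workload is actually defined in crio.conf, which is not the case in this test run.

	kubectl apply -f - <<-'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: workload-demo
	  annotations:
	    io.crio/workload: ""   # activation annotation: key only, the value is ignored
	    # per-container overrides follow the form shown in the comment above, e.g.
	    # io.crio.workload-type/<container_name> = {"cpushares": "value"}
	spec:
	  containers:
	  - name: demo
	    image: registry.k8s.io/pause:3.9
	EOF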
	I0528 21:12:51.792050   40275 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0528 21:12:51.792056   40275 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0528 21:12:51.792059   40275 command_runner.go:130] > # Default value is set to true
	I0528 21:12:51.792063   40275 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0528 21:12:51.792068   40275 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0528 21:12:51.792074   40275 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0528 21:12:51.792081   40275 command_runner.go:130] > # Default value is set to 'false'
	I0528 21:12:51.792087   40275 command_runner.go:130] > # disable_hostport_mapping = false
	I0528 21:12:51.792097   40275 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0528 21:12:51.792102   40275 command_runner.go:130] > #
	I0528 21:12:51.792110   40275 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0528 21:12:51.792119   40275 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0528 21:12:51.792131   40275 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0528 21:12:51.792142   40275 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0528 21:12:51.792151   40275 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0528 21:12:51.792155   40275 command_runner.go:130] > [crio.image]
	I0528 21:12:51.792165   40275 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0528 21:12:51.792171   40275 command_runner.go:130] > # default_transport = "docker://"
	I0528 21:12:51.792180   40275 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0528 21:12:51.792186   40275 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0528 21:12:51.792190   40275 command_runner.go:130] > # global_auth_file = ""
	I0528 21:12:51.792195   40275 command_runner.go:130] > # The image used to instantiate infra containers.
	I0528 21:12:51.792200   40275 command_runner.go:130] > # This option supports live configuration reload.
	I0528 21:12:51.792204   40275 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0528 21:12:51.792210   40275 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0528 21:12:51.792218   40275 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0528 21:12:51.792222   40275 command_runner.go:130] > # This option supports live configuration reload.
	I0528 21:12:51.792233   40275 command_runner.go:130] > # pause_image_auth_file = ""
	I0528 21:12:51.792240   40275 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0528 21:12:51.792248   40275 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0528 21:12:51.792254   40275 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0528 21:12:51.792264   40275 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0528 21:12:51.792269   40275 command_runner.go:130] > # pause_command = "/pause"
	I0528 21:12:51.792275   40275 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0528 21:12:51.792281   40275 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0528 21:12:51.792289   40275 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0528 21:12:51.792295   40275 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0528 21:12:51.792302   40275 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0528 21:12:51.792307   40275 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0528 21:12:51.792313   40275 command_runner.go:130] > # pinned_images = [
	I0528 21:12:51.792317   40275 command_runner.go:130] > # ]
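As an illustration of the three pinning pattern styles described above (exact, glob, keyword), a drop-in might look like the following sketch; the image names are placeholders and nothing is pinned this way in the test.

	# On the node; assumes the default drop-in directory /etc/crio/crio.conf.d is in use.
	sudo tee /etc/crio/crio.conf.d/20-pinned-images.conf >/dev/null <<-'EOF'
	[crio.image]
	pinned_images = [
	  "registry.k8s.io/pause:3.9", # exact: must match the entire name
	  "registry.k8s.io/kube-*",    # glob: wildcard only at the end
	  "*coredns*",                 # keyword: wildcards on both ends
	]
	EOF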
	I0528 21:12:51.792325   40275 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0528 21:12:51.792330   40275 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0528 21:12:51.792336   40275 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0528 21:12:51.792342   40275 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0528 21:12:51.792347   40275 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0528 21:12:51.792351   40275 command_runner.go:130] > # signature_policy = ""
	I0528 21:12:51.792358   40275 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0528 21:12:51.792365   40275 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0528 21:12:51.792373   40275 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0528 21:12:51.792379   40275 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0528 21:12:51.792387   40275 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0528 21:12:51.792392   40275 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0528 21:12:51.792400   40275 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0528 21:12:51.792405   40275 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0528 21:12:51.792412   40275 command_runner.go:130] > # changing them here.
	I0528 21:12:51.792416   40275 command_runner.go:130] > # insecure_registries = [
	I0528 21:12:51.792418   40275 command_runner.go:130] > # ]
	I0528 21:12:51.792424   40275 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0528 21:12:51.792431   40275 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0528 21:12:51.792435   40275 command_runner.go:130] > # image_volumes = "mkdir"
	I0528 21:12:51.792443   40275 command_runner.go:130] > # Temporary directory to use for storing big files
	I0528 21:12:51.792447   40275 command_runner.go:130] > # big_files_temporary_dir = ""
	I0528 21:12:51.792452   40275 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0528 21:12:51.792458   40275 command_runner.go:130] > # CNI plugins.
	I0528 21:12:51.792462   40275 command_runner.go:130] > [crio.network]
	I0528 21:12:51.792467   40275 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0528 21:12:51.792473   40275 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0528 21:12:51.792476   40275 command_runner.go:130] > # cni_default_network = ""
	I0528 21:12:51.792481   40275 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0528 21:12:51.792486   40275 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0528 21:12:51.792491   40275 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0528 21:12:51.792497   40275 command_runner.go:130] > # plugin_dirs = [
	I0528 21:12:51.792501   40275 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0528 21:12:51.792504   40275 command_runner.go:130] > # ]
	I0528 21:12:51.792510   40275 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0528 21:12:51.792515   40275 command_runner.go:130] > [crio.metrics]
	I0528 21:12:51.792520   40275 command_runner.go:130] > # Globally enable or disable metrics support.
	I0528 21:12:51.792524   40275 command_runner.go:130] > enable_metrics = true
	I0528 21:12:51.792528   40275 command_runner.go:130] > # Specify enabled metrics collectors.
	I0528 21:12:51.792534   40275 command_runner.go:130] > # Per default all metrics are enabled.
	I0528 21:12:51.792540   40275 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0528 21:12:51.792548   40275 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0528 21:12:51.792553   40275 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0528 21:12:51.792560   40275 command_runner.go:130] > # metrics_collectors = [
	I0528 21:12:51.792564   40275 command_runner.go:130] > # 	"operations",
	I0528 21:12:51.792570   40275 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0528 21:12:51.792574   40275 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0528 21:12:51.792580   40275 command_runner.go:130] > # 	"operations_errors",
	I0528 21:12:51.792584   40275 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0528 21:12:51.792588   40275 command_runner.go:130] > # 	"image_pulls_by_name",
	I0528 21:12:51.792592   40275 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0528 21:12:51.792598   40275 command_runner.go:130] > # 	"image_pulls_failures",
	I0528 21:12:51.792602   40275 command_runner.go:130] > # 	"image_pulls_successes",
	I0528 21:12:51.792607   40275 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0528 21:12:51.792612   40275 command_runner.go:130] > # 	"image_layer_reuse",
	I0528 21:12:51.792618   40275 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0528 21:12:51.792623   40275 command_runner.go:130] > # 	"containers_oom_total",
	I0528 21:12:51.792627   40275 command_runner.go:130] > # 	"containers_oom",
	I0528 21:12:51.792630   40275 command_runner.go:130] > # 	"processes_defunct",
	I0528 21:12:51.792634   40275 command_runner.go:130] > # 	"operations_total",
	I0528 21:12:51.792638   40275 command_runner.go:130] > # 	"operations_latency_seconds",
	I0528 21:12:51.792642   40275 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0528 21:12:51.792648   40275 command_runner.go:130] > # 	"operations_errors_total",
	I0528 21:12:51.792652   40275 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0528 21:12:51.792659   40275 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0528 21:12:51.792663   40275 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0528 21:12:51.792669   40275 command_runner.go:130] > # 	"image_pulls_success_total",
	I0528 21:12:51.792672   40275 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0528 21:12:51.792676   40275 command_runner.go:130] > # 	"containers_oom_count_total",
	I0528 21:12:51.792684   40275 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0528 21:12:51.792689   40275 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0528 21:12:51.792692   40275 command_runner.go:130] > # ]
	I0528 21:12:51.792697   40275 command_runner.go:130] > # The port on which the metrics server will listen.
	I0528 21:12:51.792703   40275 command_runner.go:130] > # metrics_port = 9090
	I0528 21:12:51.792708   40275 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0528 21:12:51.792713   40275 command_runner.go:130] > # metrics_socket = ""
	I0528 21:12:51.792718   40275 command_runner.go:130] > # The certificate for the secure metrics server.
	I0528 21:12:51.792726   40275 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0528 21:12:51.792732   40275 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0528 21:12:51.792740   40275 command_runner.go:130] > # certificate on any modification event.
	I0528 21:12:51.792744   40275 command_runner.go:130] > # metrics_cert = ""
	I0528 21:12:51.792749   40275 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0528 21:12:51.792755   40275 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0528 21:12:51.792758   40275 command_runner.go:130] > # metrics_key = ""
	I0528 21:12:51.792763   40275 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0528 21:12:51.792769   40275 command_runner.go:130] > [crio.tracing]
	I0528 21:12:51.792775   40275 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0528 21:12:51.792781   40275 command_runner.go:130] > # enable_tracing = false
	I0528 21:12:51.792786   40275 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0528 21:12:51.792792   40275 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0528 21:12:51.792798   40275 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0528 21:12:51.792805   40275 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0528 21:12:51.792809   40275 command_runner.go:130] > # CRI-O NRI configuration.
	I0528 21:12:51.792815   40275 command_runner.go:130] > [crio.nri]
	I0528 21:12:51.792819   40275 command_runner.go:130] > # Globally enable or disable NRI.
	I0528 21:12:51.792823   40275 command_runner.go:130] > # enable_nri = false
	I0528 21:12:51.792827   40275 command_runner.go:130] > # NRI socket to listen on.
	I0528 21:12:51.792833   40275 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0528 21:12:51.792838   40275 command_runner.go:130] > # NRI plugin directory to use.
	I0528 21:12:51.792844   40275 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0528 21:12:51.792849   40275 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0528 21:12:51.792854   40275 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0528 21:12:51.792861   40275 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0528 21:12:51.792865   40275 command_runner.go:130] > # nri_disable_connections = false
	I0528 21:12:51.792873   40275 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0528 21:12:51.792877   40275 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0528 21:12:51.792884   40275 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0528 21:12:51.792889   40275 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0528 21:12:51.792897   40275 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0528 21:12:51.792901   40275 command_runner.go:130] > [crio.stats]
	I0528 21:12:51.792909   40275 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0528 21:12:51.792914   40275 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0528 21:12:51.792920   40275 command_runner.go:130] > # stats_collection_period = 0
	I0528 21:12:51.792954   40275 command_runner.go:130] ! time="2024-05-28 21:12:51.748567936Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0528 21:12:51.792979   40275 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
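The block above is the full CRI-O 1.29.1 configuration as the test machinery captured it (the two trailing level=info lines are CRI-O's stderr). To reproduce a dump like this by hand, something like the command below works; whether minikube used exactly this invocation is an assumption.

	out/minikube-linux-amd64 -p multinode-869191 ssh "sudo crio config 2>/dev/null | head -n 40"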
	I0528 21:12:51.793097   40275 cni.go:84] Creating CNI manager for ""
	I0528 21:12:51.793111   40275 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0528 21:12:51.793121   40275 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0528 21:12:51.793149   40275 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.65 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-869191 NodeName:multinode-869191 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.65"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.65 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0528 21:12:51.793280   40275 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.65
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-869191"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.65
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.65"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0528 21:12:51.793336   40275 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0528 21:12:51.804121   40275 command_runner.go:130] > kubeadm
	I0528 21:12:51.804142   40275 command_runner.go:130] > kubectl
	I0528 21:12:51.804146   40275 command_runner.go:130] > kubelet
	I0528 21:12:51.804167   40275 binaries.go:44] Found k8s binaries, skipping transfer
	I0528 21:12:51.804238   40275 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0528 21:12:51.814551   40275 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0528 21:12:51.832448   40275 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0528 21:12:51.849999   40275 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
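The kubeadm configuration rendered above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new (the scp line above). Two quick sanity checks, as a sketch; kubeadm on the workstation would have to match the target version, v1.30.1 here.

	# compare the generated config against upstream defaults for the same component configs
	kubeadm config print init-defaults --component-configs KubeletConfiguration,KubeProxyConfiguration
	# inspect the file that was just copied onto the node
	out/minikube-linux-amd64 -p multinode-869191 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml.new"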
	I0528 21:12:51.866911   40275 ssh_runner.go:195] Run: grep 192.168.39.65	control-plane.minikube.internal$ /etc/hosts
	I0528 21:12:51.871087   40275 command_runner.go:130] > 192.168.39.65	control-plane.minikube.internal
	I0528 21:12:51.871166   40275 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 21:12:52.009893   40275 ssh_runner.go:195] Run: sudo systemctl start kubelet
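After the daemon-reload and kubelet start above, the usual checks if the kubelet ever fails to come up are the following (a generic sketch, not commands run by this test):

	out/minikube-linux-amd64 -p multinode-869191 ssh "sudo systemctl is-active kubelet"
	out/minikube-linux-amd64 -p multinode-869191 ssh "sudo journalctl -u kubelet --no-pager -n 50"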
	I0528 21:12:52.025381   40275 certs.go:68] Setting up /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/multinode-869191 for IP: 192.168.39.65
	I0528 21:12:52.025420   40275 certs.go:194] generating shared ca certs ...
	I0528 21:12:52.025440   40275 certs.go:226] acquiring lock for ca certs: {Name:mkf081a2e878fd451cfbdf01dc28a5f7a6a3abc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:12:52.025642   40275 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key
	I0528 21:12:52.025703   40275 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key
	I0528 21:12:52.025719   40275 certs.go:256] generating profile certs ...
	I0528 21:12:52.025852   40275 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/multinode-869191/client.key
	I0528 21:12:52.025953   40275 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/multinode-869191/apiserver.key.f28ac419
	I0528 21:12:52.026004   40275 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/multinode-869191/proxy-client.key
	I0528 21:12:52.026017   40275 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0528 21:12:52.026033   40275 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0528 21:12:52.026059   40275 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0528 21:12:52.026076   40275 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0528 21:12:52.026092   40275 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/multinode-869191/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0528 21:12:52.026111   40275 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/multinode-869191/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0528 21:12:52.026130   40275 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/multinode-869191/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0528 21:12:52.026144   40275 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/multinode-869191/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0528 21:12:52.026205   40275 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem (1338 bytes)
	W0528 21:12:52.026280   40275 certs.go:480] ignoring /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760_empty.pem, impossibly tiny 0 bytes
	I0528 21:12:52.026294   40275 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem (1675 bytes)
	I0528 21:12:52.026330   40275 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem (1078 bytes)
	I0528 21:12:52.026361   40275 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem (1123 bytes)
	I0528 21:12:52.026397   40275 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem (1675 bytes)
	I0528 21:12:52.026440   40275 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem (1708 bytes)
	I0528 21:12:52.026468   40275 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem -> /usr/share/ca-certificates/117602.pem
	I0528 21:12:52.026485   40275 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0528 21:12:52.026497   40275 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem -> /usr/share/ca-certificates/11760.pem
	I0528 21:12:52.027085   40275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0528 21:12:52.052972   40275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0528 21:12:52.077548   40275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0528 21:12:52.102642   40275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0528 21:12:52.127998   40275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/multinode-869191/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0528 21:12:52.151885   40275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/multinode-869191/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0528 21:12:52.175643   40275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/multinode-869191/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0528 21:12:52.198635   40275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/multinode-869191/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0528 21:12:52.222689   40275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem --> /usr/share/ca-certificates/117602.pem (1708 bytes)
	I0528 21:12:52.248536   40275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0528 21:12:52.274644   40275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem --> /usr/share/ca-certificates/11760.pem (1338 bytes)
	I0528 21:12:52.299251   40275 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0528 21:12:52.339602   40275 ssh_runner.go:195] Run: openssl version
	I0528 21:12:52.345751   40275 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0528 21:12:52.345836   40275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11760.pem && ln -fs /usr/share/ca-certificates/11760.pem /etc/ssl/certs/11760.pem"
	I0528 21:12:52.356730   40275 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11760.pem
	I0528 21:12:52.361922   40275 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 28 20:34 /usr/share/ca-certificates/11760.pem
	I0528 21:12:52.362069   40275 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 28 20:34 /usr/share/ca-certificates/11760.pem
	I0528 21:12:52.362122   40275 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11760.pem
	I0528 21:12:52.368147   40275 command_runner.go:130] > 51391683
	I0528 21:12:52.368440   40275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11760.pem /etc/ssl/certs/51391683.0"
	I0528 21:12:52.377946   40275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/117602.pem && ln -fs /usr/share/ca-certificates/117602.pem /etc/ssl/certs/117602.pem"
	I0528 21:12:52.388802   40275 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117602.pem
	I0528 21:12:52.393314   40275 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 28 20:34 /usr/share/ca-certificates/117602.pem
	I0528 21:12:52.393442   40275 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 28 20:34 /usr/share/ca-certificates/117602.pem
	I0528 21:12:52.393481   40275 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117602.pem
	I0528 21:12:52.399032   40275 command_runner.go:130] > 3ec20f2e
	I0528 21:12:52.399273   40275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/117602.pem /etc/ssl/certs/3ec20f2e.0"
	I0528 21:12:52.408513   40275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0528 21:12:52.419175   40275 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0528 21:12:52.423781   40275 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 28 20:22 /usr/share/ca-certificates/minikubeCA.pem
	I0528 21:12:52.423805   40275 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 28 20:22 /usr/share/ca-certificates/minikubeCA.pem
	I0528 21:12:52.423835   40275 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0528 21:12:52.430161   40275 command_runner.go:130] > b5213941
	I0528 21:12:52.430216   40275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
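(Editor's note, not part of the log.) The three sequences above install CA certificates the way OpenSSL expects: compute the subject hash with `openssl x509 -hash -noout`, then symlink the PEM as `<hash>.0` under /etc/ssl/certs. A minimal standalone sketch of that step, shelling out to openssl just as the logged commands do; the paths come from the log, the helper name is invented here, and this is not minikube's actual certs.go code (it also needs root to write into /etc/ssl/certs):

// hashlink.go - sketch of the "install a CA into /etc/ssl/certs" step above:
// ask openssl for the subject hash, then create the <hash>.0 symlink.
// Hypothetical helper; assumes openssl is on PATH.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCACert(pemPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
	link := filepath.Join(certsDir, hash+".0")
	// Equivalent of `ln -fs <pem> <hash>.0`: drop any stale link first.
	_ = os.Remove(link)
	if err := os.Symlink(pemPath, link); err != nil {
		return "", err
	}
	return link, nil
}

func main() {
	link, err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("linked:", link)
}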
	I0528 21:12:52.439365   40275 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0528 21:12:52.443652   40275 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0528 21:12:52.443667   40275 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0528 21:12:52.443673   40275 command_runner.go:130] > Device: 253,1	Inode: 8386582     Links: 1
	I0528 21:12:52.443682   40275 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0528 21:12:52.443692   40275 command_runner.go:130] > Access: 2024-05-28 21:06:39.699064296 +0000
	I0528 21:12:52.443701   40275 command_runner.go:130] > Modify: 2024-05-28 21:06:39.699064296 +0000
	I0528 21:12:52.443709   40275 command_runner.go:130] > Change: 2024-05-28 21:06:39.699064296 +0000
	I0528 21:12:52.443715   40275 command_runner.go:130] >  Birth: 2024-05-28 21:06:39.699064296 +0000
	I0528 21:12:52.443848   40275 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0528 21:12:52.449812   40275 command_runner.go:130] > Certificate will not expire
	I0528 21:12:52.449862   40275 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0528 21:12:52.455490   40275 command_runner.go:130] > Certificate will not expire
	I0528 21:12:52.455683   40275 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0528 21:12:52.461362   40275 command_runner.go:130] > Certificate will not expire
	I0528 21:12:52.461645   40275 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0528 21:12:52.467901   40275 command_runner.go:130] > Certificate will not expire
	I0528 21:12:52.467971   40275 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0528 21:12:52.473712   40275 command_runner.go:130] > Certificate will not expire
	I0528 21:12:52.473796   40275 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0528 21:12:52.480027   40275 command_runner.go:130] > Certificate will not expire
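(Editor's note, not part of the log.) The `-checkend 86400` probes above are plain OpenSSL expiry checks: the command exits non-zero, and prints "Certificate will expire", if the certificate stops being valid within the next 86400 seconds (24 hours). A minimal Go sketch of the same check, assuming a local PEM path; the function is hypothetical and not minikube's implementation:

// certcheck.go - standalone sketch of the `openssl x509 -checkend 86400` probe.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path is no longer valid
// `window` from now, mirroring `-checkend <seconds>`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	if soon {
		fmt.Println("Certificate will expire")
		os.Exit(1)
	}
	fmt.Println("Certificate will not expire")
}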
	I0528 21:12:52.480109   40275 kubeadm.go:391] StartCluster: {Name:multinode-869191 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
1 ClusterName:multinode-869191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.98 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.154 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:12:52.480265   40275 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0528 21:12:52.480330   40275 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0528 21:12:52.517096   40275 command_runner.go:130] > bfc4c2fb4e8cc12564c84e740611c5476bfa52dc073b4104d30a3a2d46e3c337
	I0528 21:12:52.517116   40275 command_runner.go:130] > 3fbc4ce7e67f1b2a0b85ca56319e69227c950c165c7baaa1a4430a7fae20be76
	I0528 21:12:52.517123   40275 command_runner.go:130] > c3c2b6923bfc3246d147f7c457a56329e4d1a73b2d1b7a9c950934b780d0a055
	I0528 21:12:52.517128   40275 command_runner.go:130] > 6025504364d6efd0fffcdbebe00d742ff98eed2d58ab5e78e9ee8563f959b7ac
	I0528 21:12:52.517133   40275 command_runner.go:130] > 4952f4946567c8b4ade9afce440b96de146c574862dacb82f4f7c0e06e1b25d0
	I0528 21:12:52.517138   40275 command_runner.go:130] > 1aa37e66c157492eb9bc9a49f3f4e20ee4a7ce5bcf6f35057d414967b79bb458
	I0528 21:12:52.517143   40275 command_runner.go:130] > 64b17a6d3213b35e9534a4c21ffacfb062961eb24e362dd0de814efb7f3a03d5
	I0528 21:12:52.517150   40275 command_runner.go:130] > e2197d4ac3e76cff0d93c083bdceedb63867452c1b621ea1537065ca628fedb9
	I0528 21:12:52.518602   40275 cri.go:89] found id: "bfc4c2fb4e8cc12564c84e740611c5476bfa52dc073b4104d30a3a2d46e3c337"
	I0528 21:12:52.518621   40275 cri.go:89] found id: "3fbc4ce7e67f1b2a0b85ca56319e69227c950c165c7baaa1a4430a7fae20be76"
	I0528 21:12:52.518627   40275 cri.go:89] found id: "c3c2b6923bfc3246d147f7c457a56329e4d1a73b2d1b7a9c950934b780d0a055"
	I0528 21:12:52.518632   40275 cri.go:89] found id: "6025504364d6efd0fffcdbebe00d742ff98eed2d58ab5e78e9ee8563f959b7ac"
	I0528 21:12:52.518636   40275 cri.go:89] found id: "4952f4946567c8b4ade9afce440b96de146c574862dacb82f4f7c0e06e1b25d0"
	I0528 21:12:52.518641   40275 cri.go:89] found id: "1aa37e66c157492eb9bc9a49f3f4e20ee4a7ce5bcf6f35057d414967b79bb458"
	I0528 21:12:52.518645   40275 cri.go:89] found id: "64b17a6d3213b35e9534a4c21ffacfb062961eb24e362dd0de814efb7f3a03d5"
	I0528 21:12:52.518649   40275 cri.go:89] found id: "e2197d4ac3e76cff0d93c083bdceedb63867452c1b621ea1537065ca628fedb9"
	I0528 21:12:52.518655   40275 cri.go:89] found id: ""
	I0528 21:12:52.518701   40275 ssh_runner.go:195] Run: sudo runc list -f json
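(Editor's note, not part of the log.) The `cri.go:89` "found id" lines above come from parsing the output of the logged `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` command: one container ID per line. A minimal local sketch of collecting those IDs by shelling out to crictl; it assumes crictl is installed, sudo is available, and the runtime socket is at its default location, and it is not minikube's cri.go code (which runs the command over SSH):

// listids.go - sketch of gathering kube-system container IDs via crictl.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func kubeSystemContainerIDs() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := kubeSystemContainerIDs()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}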
	
	
	==> CRI-O <==
	May 28 21:16:42 multinode-869191 crio[2885]: time="2024-05-28 21:16:42.458922858Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716931002458899317,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143025,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bad42e67-ed74-41f7-a23f-c0e9788765d9 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:16:42 multinode-869191 crio[2885]: time="2024-05-28 21:16:42.459390186Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=47ab5610-fcde-4ed5-a638-e72fd8120551 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:16:42 multinode-869191 crio[2885]: time="2024-05-28 21:16:42.459448428Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=47ab5610-fcde-4ed5-a638-e72fd8120551 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:16:42 multinode-869191 crio[2885]: time="2024-05-28 21:16:42.459775956Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:468d459e1e6230b2bc259c2cfe2705e236bd61bb34da278adef4636f8343fff8,PodSandboxId:d2f96bf8a39d494580400adabf53857c4e386c8dcb1362b00ab04d496415c96c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716930812503603829,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qqxb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8887a9a-26fd-42dd-b3c5-9ff88f628dae,},Annotations:map[string]string{io.kubernetes.container.hash: 171d3763,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:252b9a44a28e675b9f3ec426ddf2b2aba3af066f479de4092e3984bab9b4bdff,PodSandboxId:e476c463b3a6a2fdb96a62e5417d069fb550ac656070ad7b11b607ef9ca879a9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1716930779044101717,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-24k26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59c6483f-f65f-490c-8b1e-7b0b425a80cf,},Annotations:map[string]string{io.kubernetes.container.hash: 9426a2bc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4136b5bbb1fb11d4eb9b12e9bfb612b35c44bb147609cf3895c0d06f2e74fc6f,PodSandboxId:9f7fd13849b4d95056104af0680035bfb2c6849cd3148d0ebe3dd1506798fbaf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716930778892472387,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mj9rx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdacf113-fef4-4a34-af75-2a7908dca02f,},Annotations:map[string]string{io.kubernetes.container.hash: 35118a2f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc094a6daa47a8046b8497314256954e60b46b95bd2991e15338af3c3e6a9ae7,PodSandboxId:b14b96dafa937faa66fe5d1b341110baca885c8e88b87c34755c133a729a7db6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716930778838695092,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sj7k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9619acba-a019-4080-8c86-f63e7ce399bb,},Annotations:map[string]
string{io.kubernetes.container.hash: 8c26f61,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72434233ddf0cf45536340a7aa617a6d64512d73e0811d01074b2a626d43f79c,PodSandboxId:468780c9e91e5b9a0a73de12ffeb3cfa868878a69a66c11fa7d029d39f2c2776,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716930778757028056,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29c00081-275d-4209-bf8a-74849ccf882c,},Annotations:map[string]string{io.kub
ernetes.container.hash: 20b43327,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b852ca44def88ca8173aed69919592580b7bd4d84c154208ea11acd9e2737eb9,PodSandboxId:59f350a36f234501e2aa4d79d488bf846f36b0ea20e18b685396a08a6b7fe36d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716930774961307938,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e6ff0e976b71b85998ebb889a77071f,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3acf49a269336371c288f456c763a8f428cec04ed7f1062c7205ec7389032a2a,PodSandboxId:cfece9dd614bcdc4525a9fde0db763cf742c17664fadf84b084efc3fb49bde24,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716930774986948869,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8c10b71b468300869c8dff507045cfc,},Annotations:map[string]string{io.kubernetes.container.hash: 5b0a154e,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:067e3bfe9287fedd95a060e3d4b3b951427c0cb001493040d415db25f3f47d65,PodSandboxId:771af933b235ff9d38a752a3b25823afdfb643624815725a013a1b25c70e35f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716930775021769207,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92a4fb30036b6ac2c880fe15ce44d259,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fff07e29b61aaaf721af4f63571b7d45d059365093210ebd0a2ce382f0244b5b,PodSandboxId:8441eeb6f8cc28fd102d0cc70272043bd2d8c7fbfd607fb802e5eef8e8f25bf5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716930774935473608,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41ae7fb09cc32893ade671b91f69afc3,},Annotations:map[string]string{io.kubernetes.container.hash: be999250,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3ad8633ed09b8a5144fc6c780d4e37526722b597b04e9d62eddf8487685aced,PodSandboxId:2e646fda19ef38cd2073732544812dbcfe794b780fd168527446e897d29e03f0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716930471845338505,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qqxb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8887a9a-26fd-42dd-b3c5-9ff88f628dae,},Annotations:map[string]string{io.kubernetes.container.hash: 171d3763,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfc4c2fb4e8cc12564c84e740611c5476bfa52dc073b4104d30a3a2d46e3c337,PodSandboxId:352504b5fcfc2cffc7c153ce015bcfdb9670ecbdc6a29d12fadb53f64e0bfac1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716930429420079030,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mj9rx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdacf113-fef4-4a34-af75-2a7908dca02f,},Annotations:map[string]string{io.kubernetes.container.hash: 35118a2f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbc4ce7e67f1b2a0b85ca56319e69227c950c165c7baaa1a4430a7fae20be76,PodSandboxId:77ec75c669211ddf9014581b16023697624006e642344f3d51ff6671e4d5650a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716930429360324308,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 29c00081-275d-4209-bf8a-74849ccf882c,},Annotations:map[string]string{io.kubernetes.container.hash: 20b43327,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3c2b6923bfc3246d147f7c457a56329e4d1a73b2d1b7a9c950934b780d0a055,PodSandboxId:0a32aac2f1b58efbd4104d0e4ab1101ab2af143557828d77d3abb8c6f6dc588b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1716930427927354132,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-24k26,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 59c6483f-f65f-490c-8b1e-7b0b425a80cf,},Annotations:map[string]string{io.kubernetes.container.hash: 9426a2bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6025504364d6efd0fffcdbebe00d742ff98eed2d58ab5e78e9ee8563f959b7ac,PodSandboxId:d775edc01835ffc1f7fbf18983e1140cea4768191758fa2ec6ab0906825250d0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716930425428133632,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sj7k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9619acba-a019-4080-8c86-f63e7ce399bb,},Annotations:map[string]string{io.kubernetes.container.hash: 8c26f61,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa37e66c157492eb9bc9a49f3f4e20ee4a7ce5bcf6f35057d414967b79bb458,PodSandboxId:7ffa32fc2e8314746f95abd726fe03996c5bd26dbef18b73fdd3d67583621694,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716930403955640868,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e
6ff0e976b71b85998ebb889a77071f,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4952f4946567c8b4ade9afce440b96de146c574862dacb82f4f7c0e06e1b25d0,PodSandboxId:bcc4adde8f6d091ae8330a77072b3c75c0932b884e7dde9737e40d71e6cf20c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716930403990649236,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41ae7fb09cc32893a
de671b91f69afc3,},Annotations:map[string]string{io.kubernetes.container.hash: be999250,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64b17a6d3213b35e9534a4c21ffacfb062961eb24e362dd0de814efb7f3a03d5,PodSandboxId:7b5f0784667d4632f452c9f886fe00afdcda711d49621d66dc4ffa5bcbe0992b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716930403917319856,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8c10b71b468300869c8dff507045cfc,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 5b0a154e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2197d4ac3e76cff0d93c083bdceedb63867452c1b621ea1537065ca628fedb9,PodSandboxId:814de41b91317a66ef1b490e7dbef6b3c9f38e667341f0fbac189a3fda9a4b4f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716930403846678107,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92a4fb30036b6ac2c880fe15ce44d259,},Annotations:map
[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=47ab5610-fcde-4ed5-a638-e72fd8120551 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:16:42 multinode-869191 crio[2885]: time="2024-05-28 21:16:42.502350205Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dab7dd66-7262-471e-8a15-b3275d3bfd48 name=/runtime.v1.RuntimeService/Version
	May 28 21:16:42 multinode-869191 crio[2885]: time="2024-05-28 21:16:42.502444516Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dab7dd66-7262-471e-8a15-b3275d3bfd48 name=/runtime.v1.RuntimeService/Version
	May 28 21:16:42 multinode-869191 crio[2885]: time="2024-05-28 21:16:42.503583335Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=91086941-03d5-4142-8e05-23c8caf7943e name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:16:42 multinode-869191 crio[2885]: time="2024-05-28 21:16:42.504349116Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716931002504294922,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143025,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=91086941-03d5-4142-8e05-23c8caf7943e name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:16:42 multinode-869191 crio[2885]: time="2024-05-28 21:16:42.505117122Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=55f09a93-ce96-4c17-9422-2385c9273b48 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:16:42 multinode-869191 crio[2885]: time="2024-05-28 21:16:42.505174004Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=55f09a93-ce96-4c17-9422-2385c9273b48 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:16:42 multinode-869191 crio[2885]: time="2024-05-28 21:16:42.505533615Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:468d459e1e6230b2bc259c2cfe2705e236bd61bb34da278adef4636f8343fff8,PodSandboxId:d2f96bf8a39d494580400adabf53857c4e386c8dcb1362b00ab04d496415c96c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716930812503603829,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qqxb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8887a9a-26fd-42dd-b3c5-9ff88f628dae,},Annotations:map[string]string{io.kubernetes.container.hash: 171d3763,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:252b9a44a28e675b9f3ec426ddf2b2aba3af066f479de4092e3984bab9b4bdff,PodSandboxId:e476c463b3a6a2fdb96a62e5417d069fb550ac656070ad7b11b607ef9ca879a9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1716930779044101717,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-24k26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59c6483f-f65f-490c-8b1e-7b0b425a80cf,},Annotations:map[string]string{io.kubernetes.container.hash: 9426a2bc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4136b5bbb1fb11d4eb9b12e9bfb612b35c44bb147609cf3895c0d06f2e74fc6f,PodSandboxId:9f7fd13849b4d95056104af0680035bfb2c6849cd3148d0ebe3dd1506798fbaf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716930778892472387,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mj9rx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdacf113-fef4-4a34-af75-2a7908dca02f,},Annotations:map[string]string{io.kubernetes.container.hash: 35118a2f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc094a6daa47a8046b8497314256954e60b46b95bd2991e15338af3c3e6a9ae7,PodSandboxId:b14b96dafa937faa66fe5d1b341110baca885c8e88b87c34755c133a729a7db6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716930778838695092,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sj7k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9619acba-a019-4080-8c86-f63e7ce399bb,},Annotations:map[string]
string{io.kubernetes.container.hash: 8c26f61,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72434233ddf0cf45536340a7aa617a6d64512d73e0811d01074b2a626d43f79c,PodSandboxId:468780c9e91e5b9a0a73de12ffeb3cfa868878a69a66c11fa7d029d39f2c2776,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716930778757028056,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29c00081-275d-4209-bf8a-74849ccf882c,},Annotations:map[string]string{io.kub
ernetes.container.hash: 20b43327,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b852ca44def88ca8173aed69919592580b7bd4d84c154208ea11acd9e2737eb9,PodSandboxId:59f350a36f234501e2aa4d79d488bf846f36b0ea20e18b685396a08a6b7fe36d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716930774961307938,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e6ff0e976b71b85998ebb889a77071f,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3acf49a269336371c288f456c763a8f428cec04ed7f1062c7205ec7389032a2a,PodSandboxId:cfece9dd614bcdc4525a9fde0db763cf742c17664fadf84b084efc3fb49bde24,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716930774986948869,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8c10b71b468300869c8dff507045cfc,},Annotations:map[string]string{io.kubernetes.container.hash: 5b0a154e,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:067e3bfe9287fedd95a060e3d4b3b951427c0cb001493040d415db25f3f47d65,PodSandboxId:771af933b235ff9d38a752a3b25823afdfb643624815725a013a1b25c70e35f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716930775021769207,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92a4fb30036b6ac2c880fe15ce44d259,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fff07e29b61aaaf721af4f63571b7d45d059365093210ebd0a2ce382f0244b5b,PodSandboxId:8441eeb6f8cc28fd102d0cc70272043bd2d8c7fbfd607fb802e5eef8e8f25bf5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716930774935473608,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41ae7fb09cc32893ade671b91f69afc3,},Annotations:map[string]string{io.kubernetes.container.hash: be999250,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3ad8633ed09b8a5144fc6c780d4e37526722b597b04e9d62eddf8487685aced,PodSandboxId:2e646fda19ef38cd2073732544812dbcfe794b780fd168527446e897d29e03f0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716930471845338505,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qqxb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8887a9a-26fd-42dd-b3c5-9ff88f628dae,},Annotations:map[string]string{io.kubernetes.container.hash: 171d3763,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfc4c2fb4e8cc12564c84e740611c5476bfa52dc073b4104d30a3a2d46e3c337,PodSandboxId:352504b5fcfc2cffc7c153ce015bcfdb9670ecbdc6a29d12fadb53f64e0bfac1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716930429420079030,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mj9rx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdacf113-fef4-4a34-af75-2a7908dca02f,},Annotations:map[string]string{io.kubernetes.container.hash: 35118a2f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbc4ce7e67f1b2a0b85ca56319e69227c950c165c7baaa1a4430a7fae20be76,PodSandboxId:77ec75c669211ddf9014581b16023697624006e642344f3d51ff6671e4d5650a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716930429360324308,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 29c00081-275d-4209-bf8a-74849ccf882c,},Annotations:map[string]string{io.kubernetes.container.hash: 20b43327,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3c2b6923bfc3246d147f7c457a56329e4d1a73b2d1b7a9c950934b780d0a055,PodSandboxId:0a32aac2f1b58efbd4104d0e4ab1101ab2af143557828d77d3abb8c6f6dc588b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1716930427927354132,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-24k26,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 59c6483f-f65f-490c-8b1e-7b0b425a80cf,},Annotations:map[string]string{io.kubernetes.container.hash: 9426a2bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6025504364d6efd0fffcdbebe00d742ff98eed2d58ab5e78e9ee8563f959b7ac,PodSandboxId:d775edc01835ffc1f7fbf18983e1140cea4768191758fa2ec6ab0906825250d0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716930425428133632,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sj7k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9619acba-a019-4080-8c86-f63e7ce399bb,},Annotations:map[string]string{io.kubernetes.container.hash: 8c26f61,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa37e66c157492eb9bc9a49f3f4e20ee4a7ce5bcf6f35057d414967b79bb458,PodSandboxId:7ffa32fc2e8314746f95abd726fe03996c5bd26dbef18b73fdd3d67583621694,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716930403955640868,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e
6ff0e976b71b85998ebb889a77071f,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4952f4946567c8b4ade9afce440b96de146c574862dacb82f4f7c0e06e1b25d0,PodSandboxId:bcc4adde8f6d091ae8330a77072b3c75c0932b884e7dde9737e40d71e6cf20c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716930403990649236,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41ae7fb09cc32893a
de671b91f69afc3,},Annotations:map[string]string{io.kubernetes.container.hash: be999250,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64b17a6d3213b35e9534a4c21ffacfb062961eb24e362dd0de814efb7f3a03d5,PodSandboxId:7b5f0784667d4632f452c9f886fe00afdcda711d49621d66dc4ffa5bcbe0992b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716930403917319856,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8c10b71b468300869c8dff507045cfc,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 5b0a154e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2197d4ac3e76cff0d93c083bdceedb63867452c1b621ea1537065ca628fedb9,PodSandboxId:814de41b91317a66ef1b490e7dbef6b3c9f38e667341f0fbac189a3fda9a4b4f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716930403846678107,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92a4fb30036b6ac2c880fe15ce44d259,},Annotations:map
[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=55f09a93-ce96-4c17-9422-2385c9273b48 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:16:42 multinode-869191 crio[2885]: time="2024-05-28 21:16:42.545995833Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4d4b5960-6daa-43f3-9e90-75d7dc97c036 name=/runtime.v1.RuntimeService/Version
	May 28 21:16:42 multinode-869191 crio[2885]: time="2024-05-28 21:16:42.546074727Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4d4b5960-6daa-43f3-9e90-75d7dc97c036 name=/runtime.v1.RuntimeService/Version
	May 28 21:16:42 multinode-869191 crio[2885]: time="2024-05-28 21:16:42.547040849Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5ed21326-a8a4-440b-ab60-cebbe00e4b7d name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:16:42 multinode-869191 crio[2885]: time="2024-05-28 21:16:42.547654476Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716931002547623793,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143025,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5ed21326-a8a4-440b-ab60-cebbe00e4b7d name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:16:42 multinode-869191 crio[2885]: time="2024-05-28 21:16:42.548279240Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=650ef12f-052b-4ebc-9af5-9f91305035b9 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:16:42 multinode-869191 crio[2885]: time="2024-05-28 21:16:42.548361203Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=650ef12f-052b-4ebc-9af5-9f91305035b9 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:16:42 multinode-869191 crio[2885]: time="2024-05-28 21:16:42.548678886Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:468d459e1e6230b2bc259c2cfe2705e236bd61bb34da278adef4636f8343fff8,PodSandboxId:d2f96bf8a39d494580400adabf53857c4e386c8dcb1362b00ab04d496415c96c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716930812503603829,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qqxb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8887a9a-26fd-42dd-b3c5-9ff88f628dae,},Annotations:map[string]string{io.kubernetes.container.hash: 171d3763,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:252b9a44a28e675b9f3ec426ddf2b2aba3af066f479de4092e3984bab9b4bdff,PodSandboxId:e476c463b3a6a2fdb96a62e5417d069fb550ac656070ad7b11b607ef9ca879a9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1716930779044101717,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-24k26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59c6483f-f65f-490c-8b1e-7b0b425a80cf,},Annotations:map[string]string{io.kubernetes.container.hash: 9426a2bc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4136b5bbb1fb11d4eb9b12e9bfb612b35c44bb147609cf3895c0d06f2e74fc6f,PodSandboxId:9f7fd13849b4d95056104af0680035bfb2c6849cd3148d0ebe3dd1506798fbaf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716930778892472387,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mj9rx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdacf113-fef4-4a34-af75-2a7908dca02f,},Annotations:map[string]string{io.kubernetes.container.hash: 35118a2f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc094a6daa47a8046b8497314256954e60b46b95bd2991e15338af3c3e6a9ae7,PodSandboxId:b14b96dafa937faa66fe5d1b341110baca885c8e88b87c34755c133a729a7db6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716930778838695092,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sj7k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9619acba-a019-4080-8c86-f63e7ce399bb,},Annotations:map[string]
string{io.kubernetes.container.hash: 8c26f61,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72434233ddf0cf45536340a7aa617a6d64512d73e0811d01074b2a626d43f79c,PodSandboxId:468780c9e91e5b9a0a73de12ffeb3cfa868878a69a66c11fa7d029d39f2c2776,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716930778757028056,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29c00081-275d-4209-bf8a-74849ccf882c,},Annotations:map[string]string{io.kub
ernetes.container.hash: 20b43327,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b852ca44def88ca8173aed69919592580b7bd4d84c154208ea11acd9e2737eb9,PodSandboxId:59f350a36f234501e2aa4d79d488bf846f36b0ea20e18b685396a08a6b7fe36d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716930774961307938,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e6ff0e976b71b85998ebb889a77071f,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3acf49a269336371c288f456c763a8f428cec04ed7f1062c7205ec7389032a2a,PodSandboxId:cfece9dd614bcdc4525a9fde0db763cf742c17664fadf84b084efc3fb49bde24,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716930774986948869,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8c10b71b468300869c8dff507045cfc,},Annotations:map[string]string{io.kubernetes.container.hash: 5b0a154e,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:067e3bfe9287fedd95a060e3d4b3b951427c0cb001493040d415db25f3f47d65,PodSandboxId:771af933b235ff9d38a752a3b25823afdfb643624815725a013a1b25c70e35f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716930775021769207,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92a4fb30036b6ac2c880fe15ce44d259,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fff07e29b61aaaf721af4f63571b7d45d059365093210ebd0a2ce382f0244b5b,PodSandboxId:8441eeb6f8cc28fd102d0cc70272043bd2d8c7fbfd607fb802e5eef8e8f25bf5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716930774935473608,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41ae7fb09cc32893ade671b91f69afc3,},Annotations:map[string]string{io.kubernetes.container.hash: be999250,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3ad8633ed09b8a5144fc6c780d4e37526722b597b04e9d62eddf8487685aced,PodSandboxId:2e646fda19ef38cd2073732544812dbcfe794b780fd168527446e897d29e03f0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716930471845338505,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qqxb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8887a9a-26fd-42dd-b3c5-9ff88f628dae,},Annotations:map[string]string{io.kubernetes.container.hash: 171d3763,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfc4c2fb4e8cc12564c84e740611c5476bfa52dc073b4104d30a3a2d46e3c337,PodSandboxId:352504b5fcfc2cffc7c153ce015bcfdb9670ecbdc6a29d12fadb53f64e0bfac1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716930429420079030,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mj9rx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdacf113-fef4-4a34-af75-2a7908dca02f,},Annotations:map[string]string{io.kubernetes.container.hash: 35118a2f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbc4ce7e67f1b2a0b85ca56319e69227c950c165c7baaa1a4430a7fae20be76,PodSandboxId:77ec75c669211ddf9014581b16023697624006e642344f3d51ff6671e4d5650a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716930429360324308,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 29c00081-275d-4209-bf8a-74849ccf882c,},Annotations:map[string]string{io.kubernetes.container.hash: 20b43327,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3c2b6923bfc3246d147f7c457a56329e4d1a73b2d1b7a9c950934b780d0a055,PodSandboxId:0a32aac2f1b58efbd4104d0e4ab1101ab2af143557828d77d3abb8c6f6dc588b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1716930427927354132,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-24k26,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 59c6483f-f65f-490c-8b1e-7b0b425a80cf,},Annotations:map[string]string{io.kubernetes.container.hash: 9426a2bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6025504364d6efd0fffcdbebe00d742ff98eed2d58ab5e78e9ee8563f959b7ac,PodSandboxId:d775edc01835ffc1f7fbf18983e1140cea4768191758fa2ec6ab0906825250d0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716930425428133632,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sj7k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9619acba-a019-4080-8c86-f63e7ce399bb,},Annotations:map[string]string{io.kubernetes.container.hash: 8c26f61,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa37e66c157492eb9bc9a49f3f4e20ee4a7ce5bcf6f35057d414967b79bb458,PodSandboxId:7ffa32fc2e8314746f95abd726fe03996c5bd26dbef18b73fdd3d67583621694,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716930403955640868,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e
6ff0e976b71b85998ebb889a77071f,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4952f4946567c8b4ade9afce440b96de146c574862dacb82f4f7c0e06e1b25d0,PodSandboxId:bcc4adde8f6d091ae8330a77072b3c75c0932b884e7dde9737e40d71e6cf20c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716930403990649236,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41ae7fb09cc32893a
de671b91f69afc3,},Annotations:map[string]string{io.kubernetes.container.hash: be999250,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64b17a6d3213b35e9534a4c21ffacfb062961eb24e362dd0de814efb7f3a03d5,PodSandboxId:7b5f0784667d4632f452c9f886fe00afdcda711d49621d66dc4ffa5bcbe0992b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716930403917319856,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8c10b71b468300869c8dff507045cfc,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 5b0a154e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2197d4ac3e76cff0d93c083bdceedb63867452c1b621ea1537065ca628fedb9,PodSandboxId:814de41b91317a66ef1b490e7dbef6b3c9f38e667341f0fbac189a3fda9a4b4f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716930403846678107,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92a4fb30036b6ac2c880fe15ce44d259,},Annotations:map
[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=650ef12f-052b-4ebc-9af5-9f91305035b9 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:16:42 multinode-869191 crio[2885]: time="2024-05-28 21:16:42.588431265Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1ee9b308-ae21-4bb1-8f74-5c410cc1cd35 name=/runtime.v1.RuntimeService/Version
	May 28 21:16:42 multinode-869191 crio[2885]: time="2024-05-28 21:16:42.588495309Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1ee9b308-ae21-4bb1-8f74-5c410cc1cd35 name=/runtime.v1.RuntimeService/Version
	May 28 21:16:42 multinode-869191 crio[2885]: time="2024-05-28 21:16:42.589713831Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=475d07de-f463-41b4-9560-e017194e9397 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:16:42 multinode-869191 crio[2885]: time="2024-05-28 21:16:42.590412649Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716931002590390297,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143025,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=475d07de-f463-41b4-9560-e017194e9397 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:16:42 multinode-869191 crio[2885]: time="2024-05-28 21:16:42.590860689Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c01db955-178a-452a-be86-5d48864a7078 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:16:42 multinode-869191 crio[2885]: time="2024-05-28 21:16:42.591061763Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c01db955-178a-452a-be86-5d48864a7078 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:16:42 multinode-869191 crio[2885]: time="2024-05-28 21:16:42.591478071Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:468d459e1e6230b2bc259c2cfe2705e236bd61bb34da278adef4636f8343fff8,PodSandboxId:d2f96bf8a39d494580400adabf53857c4e386c8dcb1362b00ab04d496415c96c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1716930812503603829,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qqxb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8887a9a-26fd-42dd-b3c5-9ff88f628dae,},Annotations:map[string]string{io.kubernetes.container.hash: 171d3763,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:252b9a44a28e675b9f3ec426ddf2b2aba3af066f479de4092e3984bab9b4bdff,PodSandboxId:e476c463b3a6a2fdb96a62e5417d069fb550ac656070ad7b11b607ef9ca879a9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1716930779044101717,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-24k26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59c6483f-f65f-490c-8b1e-7b0b425a80cf,},Annotations:map[string]string{io.kubernetes.container.hash: 9426a2bc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4136b5bbb1fb11d4eb9b12e9bfb612b35c44bb147609cf3895c0d06f2e74fc6f,PodSandboxId:9f7fd13849b4d95056104af0680035bfb2c6849cd3148d0ebe3dd1506798fbaf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716930778892472387,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mj9rx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdacf113-fef4-4a34-af75-2a7908dca02f,},Annotations:map[string]string{io.kubernetes.container.hash: 35118a2f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc094a6daa47a8046b8497314256954e60b46b95bd2991e15338af3c3e6a9ae7,PodSandboxId:b14b96dafa937faa66fe5d1b341110baca885c8e88b87c34755c133a729a7db6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716930778838695092,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sj7k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9619acba-a019-4080-8c86-f63e7ce399bb,},Annotations:map[string]
string{io.kubernetes.container.hash: 8c26f61,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72434233ddf0cf45536340a7aa617a6d64512d73e0811d01074b2a626d43f79c,PodSandboxId:468780c9e91e5b9a0a73de12ffeb3cfa868878a69a66c11fa7d029d39f2c2776,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716930778757028056,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29c00081-275d-4209-bf8a-74849ccf882c,},Annotations:map[string]string{io.kub
ernetes.container.hash: 20b43327,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b852ca44def88ca8173aed69919592580b7bd4d84c154208ea11acd9e2737eb9,PodSandboxId:59f350a36f234501e2aa4d79d488bf846f36b0ea20e18b685396a08a6b7fe36d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716930774961307938,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e6ff0e976b71b85998ebb889a77071f,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3acf49a269336371c288f456c763a8f428cec04ed7f1062c7205ec7389032a2a,PodSandboxId:cfece9dd614bcdc4525a9fde0db763cf742c17664fadf84b084efc3fb49bde24,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716930774986948869,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8c10b71b468300869c8dff507045cfc,},Annotations:map[string]string{io.kubernetes.container.hash: 5b0a154e,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:067e3bfe9287fedd95a060e3d4b3b951427c0cb001493040d415db25f3f47d65,PodSandboxId:771af933b235ff9d38a752a3b25823afdfb643624815725a013a1b25c70e35f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716930775021769207,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92a4fb30036b6ac2c880fe15ce44d259,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fff07e29b61aaaf721af4f63571b7d45d059365093210ebd0a2ce382f0244b5b,PodSandboxId:8441eeb6f8cc28fd102d0cc70272043bd2d8c7fbfd607fb802e5eef8e8f25bf5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716930774935473608,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41ae7fb09cc32893ade671b91f69afc3,},Annotations:map[string]string{io.kubernetes.container.hash: be999250,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3ad8633ed09b8a5144fc6c780d4e37526722b597b04e9d62eddf8487685aced,PodSandboxId:2e646fda19ef38cd2073732544812dbcfe794b780fd168527446e897d29e03f0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1716930471845338505,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-qqxb7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8887a9a-26fd-42dd-b3c5-9ff88f628dae,},Annotations:map[string]string{io.kubernetes.container.hash: 171d3763,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfc4c2fb4e8cc12564c84e740611c5476bfa52dc073b4104d30a3a2d46e3c337,PodSandboxId:352504b5fcfc2cffc7c153ce015bcfdb9670ecbdc6a29d12fadb53f64e0bfac1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716930429420079030,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-mj9rx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdacf113-fef4-4a34-af75-2a7908dca02f,},Annotations:map[string]string{io.kubernetes.container.hash: 35118a2f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbc4ce7e67f1b2a0b85ca56319e69227c950c165c7baaa1a4430a7fae20be76,PodSandboxId:77ec75c669211ddf9014581b16023697624006e642344f3d51ff6671e4d5650a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716930429360324308,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 29c00081-275d-4209-bf8a-74849ccf882c,},Annotations:map[string]string{io.kubernetes.container.hash: 20b43327,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3c2b6923bfc3246d147f7c457a56329e4d1a73b2d1b7a9c950934b780d0a055,PodSandboxId:0a32aac2f1b58efbd4104d0e4ab1101ab2af143557828d77d3abb8c6f6dc588b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1716930427927354132,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-24k26,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 59c6483f-f65f-490c-8b1e-7b0b425a80cf,},Annotations:map[string]string{io.kubernetes.container.hash: 9426a2bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6025504364d6efd0fffcdbebe00d742ff98eed2d58ab5e78e9ee8563f959b7ac,PodSandboxId:d775edc01835ffc1f7fbf18983e1140cea4768191758fa2ec6ab0906825250d0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1716930425428133632,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sj7k8,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9619acba-a019-4080-8c86-f63e7ce399bb,},Annotations:map[string]string{io.kubernetes.container.hash: 8c26f61,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa37e66c157492eb9bc9a49f3f4e20ee4a7ce5bcf6f35057d414967b79bb458,PodSandboxId:7ffa32fc2e8314746f95abd726fe03996c5bd26dbef18b73fdd3d67583621694,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716930403955640868,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e
6ff0e976b71b85998ebb889a77071f,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4952f4946567c8b4ade9afce440b96de146c574862dacb82f4f7c0e06e1b25d0,PodSandboxId:bcc4adde8f6d091ae8330a77072b3c75c0932b884e7dde9737e40d71e6cf20c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716930403990649236,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41ae7fb09cc32893a
de671b91f69afc3,},Annotations:map[string]string{io.kubernetes.container.hash: be999250,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64b17a6d3213b35e9534a4c21ffacfb062961eb24e362dd0de814efb7f3a03d5,PodSandboxId:7b5f0784667d4632f452c9f886fe00afdcda711d49621d66dc4ffa5bcbe0992b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716930403917319856,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8c10b71b468300869c8dff507045cfc,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 5b0a154e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2197d4ac3e76cff0d93c083bdceedb63867452c1b621ea1537065ca628fedb9,PodSandboxId:814de41b91317a66ef1b490e7dbef6b3c9f38e667341f0fbac189a3fda9a4b4f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716930403846678107,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-869191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92a4fb30036b6ac2c880fe15ce44d259,},Annotations:map
[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c01db955-178a-452a-be86-5d48864a7078 name=/runtime.v1.RuntimeService/ListContainers
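	
	The entries above are cri-o's gRPC interceptor debug logs for the kubelet's periodic ListContainers/Version/ImageFsInfo polls. A minimal sketch for viewing the same stream on the node, assuming cri-o runs as the crio systemd unit inside the minikube VM:
	
	  minikube -p multinode-869191 ssh "sudo journalctl -u crio -n 200 --no-pager"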
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	468d459e1e623       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   d2f96bf8a39d4       busybox-fc5497c4f-qqxb7
	252b9a44a28e6       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      3 minutes ago       Running             kindnet-cni               1                   e476c463b3a6a       kindnet-24k26
	4136b5bbb1fb1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   1                   9f7fd13849b4d       coredns-7db6d8ff4d-mj9rx
	dc094a6daa47a       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      3 minutes ago       Running             kube-proxy                1                   b14b96dafa937       kube-proxy-sj7k8
	72434233ddf0c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       1                   468780c9e91e5       storage-provisioner
	067e3bfe9287f       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      3 minutes ago       Running             kube-controller-manager   1                   771af933b235f       kube-controller-manager-multinode-869191
	3acf49a269336       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      3 minutes ago       Running             etcd                      1                   cfece9dd614bc       etcd-multinode-869191
	b852ca44def88       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      3 minutes ago       Running             kube-scheduler            1                   59f350a36f234       kube-scheduler-multinode-869191
	fff07e29b61aa       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      3 minutes ago       Running             kube-apiserver            1                   8441eeb6f8cc2       kube-apiserver-multinode-869191
	b3ad8633ed09b       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   8 minutes ago       Exited              busybox                   0                   2e646fda19ef3       busybox-fc5497c4f-qqxb7
	bfc4c2fb4e8cc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      9 minutes ago       Exited              coredns                   0                   352504b5fcfc2       coredns-7db6d8ff4d-mj9rx
	3fbc4ce7e67f1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       0                   77ec75c669211       storage-provisioner
	c3c2b6923bfc3       docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266    9 minutes ago       Exited              kindnet-cni               0                   0a32aac2f1b58       kindnet-24k26
	6025504364d6e       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      9 minutes ago       Exited              kube-proxy                0                   d775edc01835f       kube-proxy-sj7k8
	4952f4946567c       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      9 minutes ago       Exited              kube-apiserver            0                   bcc4adde8f6d0       kube-apiserver-multinode-869191
	1aa37e66c1574       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      9 minutes ago       Exited              kube-scheduler            0                   7ffa32fc2e831       kube-scheduler-multinode-869191
	64b17a6d3213b       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      9 minutes ago       Exited              etcd                      0                   7b5f0784667d4       etcd-multinode-869191
	e2197d4ac3e76       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      9 minutes ago       Exited              kube-controller-manager   0                   814de41b91317       kube-controller-manager-multinode-869191
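	
	A listing equivalent to the table above can usually be reproduced straight from the container runtime; a minimal sketch, assuming crictl inside the VM already points at the cri-o socket (the minikube default):
	
	  minikube -p multinode-869191 ssh "sudo crictl ps -a"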
	
	
	==> coredns [4136b5bbb1fb11d4eb9b12e9bfb612b35c44bb147609cf3895c0d06f2e74fc6f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:35187 - 56157 "HINFO IN 1220250268852440767.8569700546674494853. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008912279s
	
	
	==> coredns [bfc4c2fb4e8cc12564c84e740611c5476bfa52dc073b4104d30a3a2d46e3c337] <==
	[INFO] 10.244.0.3:53899 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00156546s
	[INFO] 10.244.0.3:40765 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000048349s
	[INFO] 10.244.0.3:34881 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000037411s
	[INFO] 10.244.0.3:48787 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001163899s
	[INFO] 10.244.0.3:57425 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000094486s
	[INFO] 10.244.0.3:36844 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000031395s
	[INFO] 10.244.0.3:39117 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000028768s
	[INFO] 10.244.1.2:49235 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011624s
	[INFO] 10.244.1.2:59719 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105642s
	[INFO] 10.244.1.2:58585 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064667s
	[INFO] 10.244.1.2:33081 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000054628s
	[INFO] 10.244.0.3:59307 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112683s
	[INFO] 10.244.0.3:51157 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000079759s
	[INFO] 10.244.0.3:32830 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000062854s
	[INFO] 10.244.0.3:59588 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067153s
	[INFO] 10.244.1.2:53725 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000231853s
	[INFO] 10.244.1.2:56138 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000153165s
	[INFO] 10.244.1.2:53150 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000144621s
	[INFO] 10.244.1.2:58929 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000139937s
	[INFO] 10.244.0.3:49565 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000073314s
	[INFO] 10.244.0.3:43790 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000039622s
	[INFO] 10.244.0.3:58158 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000043412s
	[INFO] 10.244.0.3:58376 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000033006s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
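	
	The query log above records in-cluster lookups (kubernetes.default, host.minikube.internal, reverse PTR records) up to the SIGTERM that shut this coredns instance down. A comparable lookup can be run from a throwaway pod; a minimal sketch, where the busybox image tag is an assumption:
	
	  kubectl --context multinode-869191 run dns-check --rm -it --restart=Never --image=busybox:1.28 -- nslookup kubernetes.default.svc.cluster.local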
	
	
	==> describe nodes <==
	Name:               multinode-869191
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-869191
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82
	                    minikube.k8s.io/name=multinode-869191
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_28T21_06_50_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 21:06:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-869191
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 21:16:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 May 2024 21:12:58 +0000   Tue, 28 May 2024 21:06:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 May 2024 21:12:58 +0000   Tue, 28 May 2024 21:06:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 May 2024 21:12:58 +0000   Tue, 28 May 2024 21:06:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 May 2024 21:12:58 +0000   Tue, 28 May 2024 21:07:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.65
	  Hostname:    multinode-869191
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9f68408ecc464d1d950bbc1d9e9539d7
	  System UUID:                9f68408e-cc46-4d1d-950b-bc1d9e9539d7
	  Boot ID:                    10994f05-03c3-4424-8036-ffdd7c4224ad
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-qqxb7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m54s
	  kube-system                 coredns-7db6d8ff4d-mj9rx                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m39s
	  kube-system                 etcd-multinode-869191                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m53s
	  kube-system                 kindnet-24k26                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m39s
	  kube-system                 kube-apiserver-multinode-869191             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m53s
	  kube-system                 kube-controller-manager-multinode-869191    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m53s
	  kube-system                 kube-proxy-sj7k8                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m39s
	  kube-system                 kube-scheduler-multinode-869191             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m53s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m37s                  kube-proxy       
	  Normal  Starting                 3m43s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m59s (x8 over 9m59s)  kubelet          Node multinode-869191 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m59s (x8 over 9m59s)  kubelet          Node multinode-869191 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m59s (x7 over 9m59s)  kubelet          Node multinode-869191 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m53s                  kubelet          Node multinode-869191 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  9m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    9m53s                  kubelet          Node multinode-869191 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m53s                  kubelet          Node multinode-869191 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m53s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           9m40s                  node-controller  Node multinode-869191 event: Registered Node multinode-869191 in Controller
	  Normal  NodeReady                9m34s                  kubelet          Node multinode-869191 status is now: NodeReady
	  Normal  Starting                 3m48s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m48s (x8 over 3m48s)  kubelet          Node multinode-869191 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m48s (x8 over 3m48s)  kubelet          Node multinode-869191 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m48s (x7 over 3m48s)  kubelet          Node multinode-869191 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m32s                  node-controller  Node multinode-869191 event: Registered Node multinode-869191 in Controller
	
	
	Name:               multinode-869191-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-869191-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82
	                    minikube.k8s.io/name=multinode-869191
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_28T21_13_39_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 21:13:38 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-869191-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 21:14:20 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 28 May 2024 21:14:09 +0000   Tue, 28 May 2024 21:15:01 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 28 May 2024 21:14:09 +0000   Tue, 28 May 2024 21:15:01 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 28 May 2024 21:14:09 +0000   Tue, 28 May 2024 21:15:01 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 28 May 2024 21:14:09 +0000   Tue, 28 May 2024 21:15:01 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.98
	  Hostname:    multinode-869191-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3fb5ce73eb7043c7bf039afda03c6296
	  System UUID:                3fb5ce73-eb70-43c7-bf03-9afda03c6296
	  Boot ID:                    3f29dd25-66c9-4380-afd3-8e6f3230aa31
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-hz7j8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  kube-system                 kindnet-72k82              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m5s
	  kube-system                 kube-proxy-k7csx           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m59s                kube-proxy       
	  Normal  Starting                 9m1s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  9m6s (x2 over 9m6s)  kubelet          Node multinode-869191-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m6s (x2 over 9m6s)  kubelet          Node multinode-869191-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m6s (x2 over 9m6s)  kubelet          Node multinode-869191-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m56s                kubelet          Node multinode-869191-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m4s (x2 over 3m4s)  kubelet          Node multinode-869191-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m4s (x2 over 3m4s)  kubelet          Node multinode-869191-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m4s (x2 over 3m4s)  kubelet          Node multinode-869191-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m54s                kubelet          Node multinode-869191-m02 status is now: NodeReady
	  Normal  NodeNotReady             101s                 node-controller  Node multinode-869191-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.059236] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061545] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.177352] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.112604] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.256655] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +4.077738] systemd-fstab-generator[759]: Ignoring "noauto" option for root device
	[  +4.677772] systemd-fstab-generator[944]: Ignoring "noauto" option for root device
	[  +0.062683] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.479704] systemd-fstab-generator[1274]: Ignoring "noauto" option for root device
	[  +0.069056] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.219958] kauditd_printk_skb: 18 callbacks suppressed
	[May28 21:07] systemd-fstab-generator[1469]: Ignoring "noauto" option for root device
	[  +5.423914] kauditd_printk_skb: 56 callbacks suppressed
	[ +40.408980] kauditd_printk_skb: 16 callbacks suppressed
	[May28 21:12] systemd-fstab-generator[2800]: Ignoring "noauto" option for root device
	[  +0.143911] systemd-fstab-generator[2813]: Ignoring "noauto" option for root device
	[  +0.177152] systemd-fstab-generator[2827]: Ignoring "noauto" option for root device
	[  +0.143813] systemd-fstab-generator[2840]: Ignoring "noauto" option for root device
	[  +0.279522] systemd-fstab-generator[2868]: Ignoring "noauto" option for root device
	[  +1.335847] systemd-fstab-generator[2970]: Ignoring "noauto" option for root device
	[  +2.123244] systemd-fstab-generator[3095]: Ignoring "noauto" option for root device
	[  +1.006466] kauditd_printk_skb: 164 callbacks suppressed
	[May28 21:13] kauditd_printk_skb: 52 callbacks suppressed
	[  +3.037636] systemd-fstab-generator[3915]: Ignoring "noauto" option for root device
	[ +18.402282] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [3acf49a269336371c288f456c763a8f428cec04ed7f1062c7205ec7389032a2a] <==
	{"level":"info","ts":"2024-05-28T21:12:55.633296Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-28T21:12:55.633544Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-28T21:12:55.634288Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c17fb7325889e027 switched to configuration voters=(13943064398224023591)"}
	{"level":"info","ts":"2024-05-28T21:12:55.638524Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f0d16ed1ce05ac0e","local-member-id":"c17fb7325889e027","added-peer-id":"c17fb7325889e027","added-peer-peer-urls":["https://192.168.39.65:2380"]}
	{"level":"info","ts":"2024-05-28T21:12:55.638954Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f0d16ed1ce05ac0e","local-member-id":"c17fb7325889e027","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-28T21:12:55.641309Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-28T21:12:55.647965Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-28T21:12:55.648263Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"c17fb7325889e027","initial-advertise-peer-urls":["https://192.168.39.65:2380"],"listen-peer-urls":["https://192.168.39.65:2380"],"advertise-client-urls":["https://192.168.39.65:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.65:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-28T21:12:55.648312Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-28T21:12:55.649471Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.65:2380"}
	{"level":"info","ts":"2024-05-28T21:12:55.649507Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.65:2380"}
	{"level":"info","ts":"2024-05-28T21:12:56.843955Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c17fb7325889e027 is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-28T21:12:56.844063Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c17fb7325889e027 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-28T21:12:56.844141Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c17fb7325889e027 received MsgPreVoteResp from c17fb7325889e027 at term 2"}
	{"level":"info","ts":"2024-05-28T21:12:56.844176Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c17fb7325889e027 became candidate at term 3"}
	{"level":"info","ts":"2024-05-28T21:12:56.844272Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c17fb7325889e027 received MsgVoteResp from c17fb7325889e027 at term 3"}
	{"level":"info","ts":"2024-05-28T21:12:56.844302Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c17fb7325889e027 became leader at term 3"}
	{"level":"info","ts":"2024-05-28T21:12:56.844333Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c17fb7325889e027 elected leader c17fb7325889e027 at term 3"}
	{"level":"info","ts":"2024-05-28T21:12:56.851884Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"c17fb7325889e027","local-member-attributes":"{Name:multinode-869191 ClientURLs:[https://192.168.39.65:2379]}","request-path":"/0/members/c17fb7325889e027/attributes","cluster-id":"f0d16ed1ce05ac0e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-28T21:12:56.852111Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-28T21:12:56.852343Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-28T21:12:56.852386Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-28T21:12:56.852796Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-28T21:12:56.854817Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-28T21:12:56.854989Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.65:2379"}
	
	
	==> etcd [64b17a6d3213b35e9534a4c21ffacfb062961eb24e362dd0de814efb7f3a03d5] <==
	{"level":"warn","ts":"2024-05-28T21:08:24.907181Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"180.187322ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16152036647794359588 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-869191-m03.17d3c33280d8ce0b\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-869191-m03.17d3c33280d8ce0b\" value_size:646 lease:6928664610939583525 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-05-28T21:08:24.907483Z","caller":"traceutil/trace.go:171","msg":"trace[364224060] transaction","detail":"{read_only:false; response_revision:606; number_of_response:1; }","duration":"187.529019ms","start":"2024-05-28T21:08:24.719923Z","end":"2024-05-28T21:08:24.907452Z","steps":["trace[364224060] 'process raft request'  (duration: 187.487069ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-28T21:08:24.907621Z","caller":"traceutil/trace.go:171","msg":"trace[1714664065] transaction","detail":"{read_only:false; response_revision:605; number_of_response:1; }","duration":"259.527417ms","start":"2024-05-28T21:08:24.648087Z","end":"2024-05-28T21:08:24.907614Z","steps":["trace[1714664065] 'process raft request'  (duration: 78.805916ms)","trace[1714664065] 'compare'  (duration: 180.035866ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-28T21:08:24.907783Z","caller":"traceutil/trace.go:171","msg":"trace[1256363740] linearizableReadLoop","detail":"{readStateIndex:637; appliedIndex:636; }","duration":"256.860934ms","start":"2024-05-28T21:08:24.650909Z","end":"2024-05-28T21:08:24.90777Z","steps":["trace[1256363740] 'read index received'  (duration: 75.991619ms)","trace[1256363740] 'applied index is now lower than readState.Index'  (duration: 180.868335ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-28T21:08:24.907953Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"257.035675ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-869191-m03\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-05-28T21:08:24.909287Z","caller":"traceutil/trace.go:171","msg":"trace[273515262] range","detail":"{range_begin:/registry/minions/multinode-869191-m03; range_end:; response_count:1; response_revision:606; }","duration":"258.38173ms","start":"2024-05-28T21:08:24.65089Z","end":"2024-05-28T21:08:24.909272Z","steps":["trace[273515262] 'agreement among raft nodes before linearized reading'  (duration: 256.977637ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T21:08:24.909453Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.754218ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-869191-m03\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-05-28T21:08:24.909498Z","caller":"traceutil/trace.go:171","msg":"trace[1466748700] range","detail":"{range_begin:/registry/minions/multinode-869191-m03; range_end:; response_count:1; response_revision:606; }","duration":"117.823569ms","start":"2024-05-28T21:08:24.791667Z","end":"2024-05-28T21:08:24.90949Z","steps":["trace[1466748700] 'agreement among raft nodes before linearized reading'  (duration: 117.755921ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-28T21:08:29.75937Z","caller":"traceutil/trace.go:171","msg":"trace[1910362574] transaction","detail":"{read_only:false; response_revision:643; number_of_response:1; }","duration":"108.725124ms","start":"2024-05-28T21:08:29.650624Z","end":"2024-05-28T21:08:29.759349Z","steps":["trace[1910362574] 'process raft request'  (duration: 108.518862ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-28T21:08:30.057847Z","caller":"traceutil/trace.go:171","msg":"trace[1654002351] transaction","detail":"{read_only:false; response_revision:644; number_of_response:1; }","duration":"200.337379ms","start":"2024-05-28T21:08:29.857486Z","end":"2024-05-28T21:08:30.057824Z","steps":["trace[1654002351] 'process raft request'  (duration: 200.187192ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-28T21:08:30.058183Z","caller":"traceutil/trace.go:171","msg":"trace[1374969171] linearizableReadLoop","detail":"{readStateIndex:681; appliedIndex:681; }","duration":"107.325907ms","start":"2024-05-28T21:08:29.950841Z","end":"2024-05-28T21:08:30.058167Z","steps":["trace[1374969171] 'read index received'  (duration: 107.318989ms)","trace[1374969171] 'applied index is now lower than readState.Index'  (duration: 5.506µs)"],"step_count":2}
	{"level":"warn","ts":"2024-05-28T21:08:30.058437Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.579197ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2024-05-28T21:08:30.058487Z","caller":"traceutil/trace.go:171","msg":"trace[726718063] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:644; }","duration":"107.659346ms","start":"2024-05-28T21:08:29.950817Z","end":"2024-05-28T21:08:30.058477Z","steps":["trace[726718063] 'agreement among raft nodes before linearized reading'  (duration: 107.506163ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T21:08:30.070306Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.1769ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-869191-m03\" ","response":"range_response_count:1 size:2953"}
	{"level":"info","ts":"2024-05-28T21:08:30.070384Z","caller":"traceutil/trace.go:171","msg":"trace[590012406] range","detail":"{range_begin:/registry/minions/multinode-869191-m03; range_end:; response_count:1; response_revision:645; }","duration":"108.289157ms","start":"2024-05-28T21:08:29.962086Z","end":"2024-05-28T21:08:30.070376Z","steps":["trace[590012406] 'agreement among raft nodes before linearized reading'  (duration: 107.747507ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-28T21:11:18.423239Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-05-28T21:11:18.423387Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-869191","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.65:2380"],"advertise-client-urls":["https://192.168.39.65:2379"]}
	{"level":"warn","ts":"2024-05-28T21:11:18.423502Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-28T21:11:18.423647Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-28T21:11:18.47532Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.65:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-28T21:11:18.475401Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.65:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-28T21:11:18.475527Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"c17fb7325889e027","current-leader-member-id":"c17fb7325889e027"}
	{"level":"info","ts":"2024-05-28T21:11:18.480191Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.65:2380"}
	{"level":"info","ts":"2024-05-28T21:11:18.480372Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.65:2380"}
	{"level":"info","ts":"2024-05-28T21:11:18.480397Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-869191","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.65:2380"],"advertise-client-urls":["https://192.168.39.65:2379"]}
	
	
	==> kernel <==
	 21:16:43 up 10 min,  0 users,  load average: 0.35, 0.31, 0.18
	Linux multinode-869191 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [252b9a44a28e675b9f3ec426ddf2b2aba3af066f479de4092e3984bab9b4bdff] <==
	I0528 21:15:39.922998       1 main.go:250] Node multinode-869191-m02 has CIDR [10.244.1.0/24] 
	I0528 21:15:49.929616       1 main.go:223] Handling node with IPs: map[192.168.39.65:{}]
	I0528 21:15:49.929699       1 main.go:227] handling current node
	I0528 21:15:49.929722       1 main.go:223] Handling node with IPs: map[192.168.39.98:{}]
	I0528 21:15:49.929749       1 main.go:250] Node multinode-869191-m02 has CIDR [10.244.1.0/24] 
	I0528 21:15:59.935525       1 main.go:223] Handling node with IPs: map[192.168.39.65:{}]
	I0528 21:15:59.935739       1 main.go:227] handling current node
	I0528 21:15:59.935784       1 main.go:223] Handling node with IPs: map[192.168.39.98:{}]
	I0528 21:15:59.935802       1 main.go:250] Node multinode-869191-m02 has CIDR [10.244.1.0/24] 
	I0528 21:16:09.940532       1 main.go:223] Handling node with IPs: map[192.168.39.65:{}]
	I0528 21:16:09.940587       1 main.go:227] handling current node
	I0528 21:16:09.940602       1 main.go:223] Handling node with IPs: map[192.168.39.98:{}]
	I0528 21:16:09.940607       1 main.go:250] Node multinode-869191-m02 has CIDR [10.244.1.0/24] 
	I0528 21:16:19.954569       1 main.go:223] Handling node with IPs: map[192.168.39.65:{}]
	I0528 21:16:19.954608       1 main.go:227] handling current node
	I0528 21:16:19.954676       1 main.go:223] Handling node with IPs: map[192.168.39.98:{}]
	I0528 21:16:19.954681       1 main.go:250] Node multinode-869191-m02 has CIDR [10.244.1.0/24] 
	I0528 21:16:29.969365       1 main.go:223] Handling node with IPs: map[192.168.39.65:{}]
	I0528 21:16:29.969402       1 main.go:227] handling current node
	I0528 21:16:29.969480       1 main.go:223] Handling node with IPs: map[192.168.39.98:{}]
	I0528 21:16:29.969501       1 main.go:250] Node multinode-869191-m02 has CIDR [10.244.1.0/24] 
	I0528 21:16:39.979839       1 main.go:223] Handling node with IPs: map[192.168.39.65:{}]
	I0528 21:16:39.979924       1 main.go:227] handling current node
	I0528 21:16:39.979947       1 main.go:223] Handling node with IPs: map[192.168.39.98:{}]
	I0528 21:16:39.979963       1 main.go:250] Node multinode-869191-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [c3c2b6923bfc3246d147f7c457a56329e4d1a73b2d1b7a9c950934b780d0a055] <==
	I0528 21:10:28.776193       1 main.go:250] Node multinode-869191-m03 has CIDR [10.244.3.0/24] 
	I0528 21:10:38.780285       1 main.go:223] Handling node with IPs: map[192.168.39.65:{}]
	I0528 21:10:38.780371       1 main.go:227] handling current node
	I0528 21:10:38.780395       1 main.go:223] Handling node with IPs: map[192.168.39.98:{}]
	I0528 21:10:38.780417       1 main.go:250] Node multinode-869191-m02 has CIDR [10.244.1.0/24] 
	I0528 21:10:38.780541       1 main.go:223] Handling node with IPs: map[192.168.39.154:{}]
	I0528 21:10:38.780562       1 main.go:250] Node multinode-869191-m03 has CIDR [10.244.3.0/24] 
	I0528 21:10:48.794630       1 main.go:223] Handling node with IPs: map[192.168.39.65:{}]
	I0528 21:10:48.794715       1 main.go:227] handling current node
	I0528 21:10:48.794740       1 main.go:223] Handling node with IPs: map[192.168.39.98:{}]
	I0528 21:10:48.794756       1 main.go:250] Node multinode-869191-m02 has CIDR [10.244.1.0/24] 
	I0528 21:10:48.794878       1 main.go:223] Handling node with IPs: map[192.168.39.154:{}]
	I0528 21:10:48.794898       1 main.go:250] Node multinode-869191-m03 has CIDR [10.244.3.0/24] 
	I0528 21:10:58.807105       1 main.go:223] Handling node with IPs: map[192.168.39.65:{}]
	I0528 21:10:58.807192       1 main.go:227] handling current node
	I0528 21:10:58.807290       1 main.go:223] Handling node with IPs: map[192.168.39.98:{}]
	I0528 21:10:58.807308       1 main.go:250] Node multinode-869191-m02 has CIDR [10.244.1.0/24] 
	I0528 21:10:58.807425       1 main.go:223] Handling node with IPs: map[192.168.39.154:{}]
	I0528 21:10:58.807444       1 main.go:250] Node multinode-869191-m03 has CIDR [10.244.3.0/24] 
	I0528 21:11:08.817281       1 main.go:223] Handling node with IPs: map[192.168.39.65:{}]
	I0528 21:11:08.817418       1 main.go:227] handling current node
	I0528 21:11:08.817491       1 main.go:223] Handling node with IPs: map[192.168.39.98:{}]
	I0528 21:11:08.817524       1 main.go:250] Node multinode-869191-m02 has CIDR [10.244.1.0/24] 
	I0528 21:11:08.817639       1 main.go:223] Handling node with IPs: map[192.168.39.154:{}]
	I0528 21:11:08.817659       1 main.go:250] Node multinode-869191-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [4952f4946567c8b4ade9afce440b96de146c574862dacb82f4f7c0e06e1b25d0] <==
	I0528 21:11:18.442928       1 controller.go:130] Shutting down legacy_token_tracking_controller
	I0528 21:11:18.438798       1 crd_finalizer.go:278] Shutting down CRDFinalizer
	I0528 21:11:18.439125       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0528 21:11:18.439169       1 controller.go:157] Shutting down quota evaluator
	I0528 21:11:18.443498       1 controller.go:176] quota evaluator worker shutdown
	I0528 21:11:18.439570       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0528 21:11:18.443582       1 controller.go:176] quota evaluator worker shutdown
	I0528 21:11:18.443605       1 controller.go:176] quota evaluator worker shutdown
	I0528 21:11:18.443626       1 controller.go:176] quota evaluator worker shutdown
	I0528 21:11:18.443648       1 controller.go:176] quota evaluator worker shutdown
	I0528 21:11:18.446130       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0528 21:11:18.451052       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	W0528 21:11:18.456462       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:11:18.457089       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:11:18.457186       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:11:18.457373       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:11:18.457430       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:11:18.457480       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:11:18.457532       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:11:18.457590       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:11:18.457642       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:11:18.457693       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:11:18.457744       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:11:18.457801       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:11:18.457861       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [fff07e29b61aaaf721af4f63571b7d45d059365093210ebd0a2ce382f0244b5b] <==
	I0528 21:12:58.175289       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0528 21:12:58.181608       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0528 21:12:58.181639       1 policy_source.go:224] refreshing policies
	I0528 21:12:58.182766       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0528 21:12:58.211042       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0528 21:12:58.211259       1 aggregator.go:165] initial CRD sync complete...
	I0528 21:12:58.211291       1 autoregister_controller.go:141] Starting autoregister controller
	I0528 21:12:58.211315       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0528 21:12:58.211337       1 cache.go:39] Caches are synced for autoregister controller
	I0528 21:12:58.267580       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0528 21:12:58.267655       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0528 21:12:58.267762       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0528 21:12:58.268961       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0528 21:12:58.269172       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0528 21:12:58.271307       1 shared_informer.go:320] Caches are synced for configmaps
	I0528 21:12:58.279171       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0528 21:12:58.293594       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0528 21:12:59.085034       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0528 21:13:00.237130       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0528 21:13:00.364439       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0528 21:13:00.375833       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0528 21:13:00.450903       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0528 21:13:00.457094       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0528 21:13:10.933150       1 controller.go:615] quota admission added evaluator for: endpoints
	I0528 21:13:10.989135       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [067e3bfe9287fedd95a060e3d4b3b951427c0cb001493040d415db25f3f47d65] <==
	I0528 21:13:38.744953       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-869191-m02" podCIDRs=["10.244.1.0/24"]
	I0528 21:13:40.616988       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.799µs"
	I0528 21:13:40.653597       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.271µs"
	I0528 21:13:40.662162       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.326µs"
	I0528 21:13:40.676488       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.071µs"
	I0528 21:13:40.680685       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.715µs"
	I0528 21:13:40.682757       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.61µs"
	I0528 21:13:41.100322       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.908µs"
	I0528 21:13:48.290359       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-869191-m02"
	I0528 21:13:48.312846       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.339µs"
	I0528 21:13:48.325940       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.248µs"
	I0528 21:13:52.135754       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.118332ms"
	I0528 21:13:52.136249       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="133.386µs"
	I0528 21:14:06.367150       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-869191-m02"
	I0528 21:14:07.574497       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-869191-m03\" does not exist"
	I0528 21:14:07.574564       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-869191-m02"
	I0528 21:14:07.585133       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-869191-m03" podCIDRs=["10.244.2.0/24"]
	I0528 21:14:15.683489       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-869191-m02"
	I0528 21:14:21.235306       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-869191-m02"
	I0528 21:15:01.064413       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.621579ms"
	I0528 21:15:01.064716       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.97µs"
	I0528 21:15:10.926782       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-z5bd7"
	I0528 21:15:10.949538       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-z5bd7"
	I0528 21:15:10.949614       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-vw26c"
	I0528 21:15:10.970052       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-vw26c"
	
	
	==> kube-controller-manager [e2197d4ac3e76cff0d93c083bdceedb63867452c1b621ea1537065ca628fedb9] <==
	I0528 21:07:37.002764       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-869191-m02" podCIDRs=["10.244.1.0/24"]
	I0528 21:07:37.627379       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-869191-m02"
	I0528 21:07:46.249967       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-869191-m02"
	I0528 21:07:48.598882       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.168703ms"
	I0528 21:07:48.618369       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.427886ms"
	I0528 21:07:48.618540       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.25µs"
	I0528 21:07:48.620903       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.239µs"
	I0528 21:07:48.630357       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.275µs"
	I0528 21:07:52.484183       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.363408ms"
	I0528 21:07:52.484573       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.883µs"
	I0528 21:07:52.799148       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.140135ms"
	I0528 21:07:52.799608       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.684µs"
	I0528 21:08:24.911318       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-869191-m02"
	I0528 21:08:24.912304       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-869191-m03\" does not exist"
	I0528 21:08:24.924294       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-869191-m03" podCIDRs=["10.244.2.0/24"]
	I0528 21:08:27.650173       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-869191-m03"
	I0528 21:08:34.768573       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-869191-m02"
	I0528 21:09:02.886526       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-869191-m02"
	I0528 21:09:03.891717       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-869191-m03\" does not exist"
	I0528 21:09:03.891917       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-869191-m02"
	I0528 21:09:03.903192       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-869191-m03" podCIDRs=["10.244.3.0/24"]
	I0528 21:09:12.763707       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-869191-m02"
	I0528 21:09:52.699606       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-869191-m03"
	I0528 21:09:52.748324       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.959311ms"
	I0528 21:09:52.748628       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.126µs"
	
	
	==> kube-proxy [6025504364d6efd0fffcdbebe00d742ff98eed2d58ab5e78e9ee8563f959b7ac] <==
	I0528 21:07:05.559517       1 server_linux.go:69] "Using iptables proxy"
	I0528 21:07:05.572942       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.65"]
	I0528 21:07:05.614660       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0528 21:07:05.614705       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0528 21:07:05.614719       1 server_linux.go:165] "Using iptables Proxier"
	I0528 21:07:05.617550       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0528 21:07:05.617824       1 server.go:872] "Version info" version="v1.30.1"
	I0528 21:07:05.617870       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 21:07:05.619455       1 config.go:192] "Starting service config controller"
	I0528 21:07:05.619508       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0528 21:07:05.619551       1 config.go:101] "Starting endpoint slice config controller"
	I0528 21:07:05.619568       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0528 21:07:05.620149       1 config.go:319] "Starting node config controller"
	I0528 21:07:05.620260       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0528 21:07:05.720150       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0528 21:07:05.720259       1 shared_informer.go:320] Caches are synced for service config
	I0528 21:07:05.720355       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [dc094a6daa47a8046b8497314256954e60b46b95bd2991e15338af3c3e6a9ae7] <==
	I0528 21:12:59.075059       1 server_linux.go:69] "Using iptables proxy"
	I0528 21:12:59.104371       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.65"]
	I0528 21:12:59.195420       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0528 21:12:59.195485       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0528 21:12:59.195503       1 server_linux.go:165] "Using iptables Proxier"
	I0528 21:12:59.206311       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0528 21:12:59.206509       1 server.go:872] "Version info" version="v1.30.1"
	I0528 21:12:59.206524       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 21:12:59.210832       1 config.go:192] "Starting service config controller"
	I0528 21:12:59.210868       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0528 21:12:59.210888       1 config.go:101] "Starting endpoint slice config controller"
	I0528 21:12:59.210892       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0528 21:12:59.211293       1 config.go:319] "Starting node config controller"
	I0528 21:12:59.211299       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0528 21:12:59.311908       1 shared_informer.go:320] Caches are synced for node config
	I0528 21:12:59.311953       1 shared_informer.go:320] Caches are synced for service config
	I0528 21:12:59.311980       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [1aa37e66c157492eb9bc9a49f3f4e20ee4a7ce5bcf6f35057d414967b79bb458] <==
	W0528 21:06:47.608292       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0528 21:06:47.608321       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0528 21:06:47.620383       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0528 21:06:47.620483       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0528 21:06:47.631881       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0528 21:06:47.631954       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0528 21:06:47.682435       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0528 21:06:47.682558       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0528 21:06:47.737261       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0528 21:06:47.737291       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0528 21:06:47.875486       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0528 21:06:47.875603       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0528 21:06:47.890467       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0528 21:06:47.890557       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0528 21:06:47.897674       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0528 21:06:47.897827       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0528 21:06:47.992863       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0528 21:06:47.992911       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0528 21:06:48.215145       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0528 21:06:48.215191       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0528 21:06:50.250440       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0528 21:11:18.414139       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0528 21:11:18.414415       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0528 21:11:18.414753       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0528 21:11:18.415737       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [b852ca44def88ca8173aed69919592580b7bd4d84c154208ea11acd9e2737eb9] <==
	I0528 21:12:55.833541       1 serving.go:380] Generated self-signed cert in-memory
	W0528 21:12:58.130847       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0528 21:12:58.130950       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0528 21:12:58.130979       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0528 21:12:58.131008       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0528 21:12:58.185388       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0528 21:12:58.185431       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 21:12:58.190384       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0528 21:12:58.192310       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0528 21:12:58.192379       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0528 21:12:58.192425       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0528 21:12:58.293266       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 28 21:12:58 multinode-869191 kubelet[3102]: I0528 21:12:58.248787    3102 topology_manager.go:215] "Topology Admit Handler" podUID="f8887a9a-26fd-42dd-b3c5-9ff88f628dae" podNamespace="default" podName="busybox-fc5497c4f-qqxb7"
	May 28 21:12:58 multinode-869191 kubelet[3102]: I0528 21:12:58.259660    3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/59c6483f-f65f-490c-8b1e-7b0b425a80cf-cni-cfg\") pod \"kindnet-24k26\" (UID: \"59c6483f-f65f-490c-8b1e-7b0b425a80cf\") " pod="kube-system/kindnet-24k26"
	May 28 21:12:58 multinode-869191 kubelet[3102]: I0528 21:12:58.259703    3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/29c00081-275d-4209-bf8a-74849ccf882c-tmp\") pod \"storage-provisioner\" (UID: \"29c00081-275d-4209-bf8a-74849ccf882c\") " pod="kube-system/storage-provisioner"
	May 28 21:12:58 multinode-869191 kubelet[3102]: I0528 21:12:58.259738    3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/59c6483f-f65f-490c-8b1e-7b0b425a80cf-xtables-lock\") pod \"kindnet-24k26\" (UID: \"59c6483f-f65f-490c-8b1e-7b0b425a80cf\") " pod="kube-system/kindnet-24k26"
	May 28 21:12:58 multinode-869191 kubelet[3102]: I0528 21:12:58.259762    3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/59c6483f-f65f-490c-8b1e-7b0b425a80cf-lib-modules\") pod \"kindnet-24k26\" (UID: \"59c6483f-f65f-490c-8b1e-7b0b425a80cf\") " pod="kube-system/kindnet-24k26"
	May 28 21:12:58 multinode-869191 kubelet[3102]: I0528 21:12:58.260004    3102 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	May 28 21:12:58 multinode-869191 kubelet[3102]: I0528 21:12:58.360163    3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9619acba-a019-4080-8c86-f63e7ce399bb-lib-modules\") pod \"kube-proxy-sj7k8\" (UID: \"9619acba-a019-4080-8c86-f63e7ce399bb\") " pod="kube-system/kube-proxy-sj7k8"
	May 28 21:12:58 multinode-869191 kubelet[3102]: I0528 21:12:58.360369    3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9619acba-a019-4080-8c86-f63e7ce399bb-xtables-lock\") pod \"kube-proxy-sj7k8\" (UID: \"9619acba-a019-4080-8c86-f63e7ce399bb\") " pod="kube-system/kube-proxy-sj7k8"
	May 28 21:13:00 multinode-869191 kubelet[3102]: I0528 21:13:00.417763    3102 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	May 28 21:13:08 multinode-869191 kubelet[3102]: I0528 21:13:08.259869    3102 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	May 28 21:13:54 multinode-869191 kubelet[3102]: E0528 21:13:54.347430    3102 iptables.go:577] "Could not set up iptables canary" err=<
	May 28 21:13:54 multinode-869191 kubelet[3102]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 21:13:54 multinode-869191 kubelet[3102]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 21:13:54 multinode-869191 kubelet[3102]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 21:13:54 multinode-869191 kubelet[3102]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 28 21:14:54 multinode-869191 kubelet[3102]: E0528 21:14:54.336160    3102 iptables.go:577] "Could not set up iptables canary" err=<
	May 28 21:14:54 multinode-869191 kubelet[3102]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 21:14:54 multinode-869191 kubelet[3102]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 21:14:54 multinode-869191 kubelet[3102]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 21:14:54 multinode-869191 kubelet[3102]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 28 21:15:54 multinode-869191 kubelet[3102]: E0528 21:15:54.337356    3102 iptables.go:577] "Could not set up iptables canary" err=<
	May 28 21:15:54 multinode-869191 kubelet[3102]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 21:15:54 multinode-869191 kubelet[3102]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 21:15:54 multinode-869191 kubelet[3102]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 21:15:54 multinode-869191 kubelet[3102]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0528 21:16:42.168568   42153 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18966-3963/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
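The "bufio.Scanner: token too long" failure in the stderr block above is the stock error Go's bufio.Scanner returns once a single line exceeds its default token limit (bufio.MaxScanTokenSize, 64 KiB); lastStart.txt evidently contains at least one oversized line. A minimal sketch of reading such a file with a larger ceiling via Scanner.Buffer (a standalone example, not minikube's actual logs code; the path is illustrative):

	package main
	
	import (
		"bufio"
		"fmt"
		"os"
	)
	
	func main() {
		// Illustrative path; the report points at .minikube/logs/lastStart.txt.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()
	
		sc := bufio.NewScanner(f)
		// Default cap is bufio.MaxScanTokenSize (64 KiB); one oversized log line
		// makes Scan() stop with ErrTooLong. Give the scanner a larger ceiling.
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			_ = sc.Text() // handle each log line here
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan failed:", err)
		}
	}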
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-869191 -n multinode-869191
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-869191 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.36s)
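A note on the kubelet entries in the post-mortem above: the recurring "Could not set up iptables canary" errors mean the guest kernel could not provide the ip6tables "nat" table, so kubelet's KUBE-KUBELET-CANARY chain can never be created. A minimal check inside the node is sketched below; it assumes the module is named ip6table_nat and that it ships with the minikube guest image, neither of which this report verifies:

	out/minikube-linux-amd64 -p multinode-869191 ssh "lsmod | grep ip6table_nat || echo 'ip6table_nat not loaded'"
	out/minikube-linux-amd64 -p multinode-869191 ssh "sudo modprobe ip6table_nat && sudo ip6tables -t nat -L -n"

If the module cannot be loaded, these canary errors will recur on every kubelet canary pass and are not, by themselves, evidence of why the stop timed out.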

                                                
                                    
x
+
TestPreload (250.89s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-285104 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0528 21:22:37.450762   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/functional-193928/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-285104 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m45.254613254s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-285104 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-285104 image pull gcr.io/k8s-minikube/busybox: (2.931001804s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-285104
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-285104: (7.387663522s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-285104 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0528 21:24:25.645820   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.crt: no such file or directory
E0528 21:24:42.597918   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-285104 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m12.371297412s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-285104 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:626: *** TestPreload FAILED at 2024-05-28 21:24:43.988303839 +0000 UTC m=+3814.744382441
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-285104 -n test-preload-285104
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-285104 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-285104 logs -n 25: (1.040788564s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-869191 ssh -n                                                                 | multinode-869191     | jenkins | v1.33.1 | 28 May 24 21:08 UTC | 28 May 24 21:08 UTC |
	|         | multinode-869191-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-869191 ssh -n multinode-869191 sudo cat                                       | multinode-869191     | jenkins | v1.33.1 | 28 May 24 21:08 UTC | 28 May 24 21:08 UTC |
	|         | /home/docker/cp-test_multinode-869191-m03_multinode-869191.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-869191 cp multinode-869191-m03:/home/docker/cp-test.txt                       | multinode-869191     | jenkins | v1.33.1 | 28 May 24 21:08 UTC | 28 May 24 21:08 UTC |
	|         | multinode-869191-m02:/home/docker/cp-test_multinode-869191-m03_multinode-869191-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-869191 ssh -n                                                                 | multinode-869191     | jenkins | v1.33.1 | 28 May 24 21:08 UTC | 28 May 24 21:08 UTC |
	|         | multinode-869191-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-869191 ssh -n multinode-869191-m02 sudo cat                                   | multinode-869191     | jenkins | v1.33.1 | 28 May 24 21:08 UTC | 28 May 24 21:08 UTC |
	|         | /home/docker/cp-test_multinode-869191-m03_multinode-869191-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-869191 node stop m03                                                          | multinode-869191     | jenkins | v1.33.1 | 28 May 24 21:08 UTC | 28 May 24 21:08 UTC |
	| node    | multinode-869191 node start                                                             | multinode-869191     | jenkins | v1.33.1 | 28 May 24 21:08 UTC | 28 May 24 21:09 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-869191                                                                | multinode-869191     | jenkins | v1.33.1 | 28 May 24 21:09 UTC |                     |
	| stop    | -p multinode-869191                                                                     | multinode-869191     | jenkins | v1.33.1 | 28 May 24 21:09 UTC |                     |
	| start   | -p multinode-869191                                                                     | multinode-869191     | jenkins | v1.33.1 | 28 May 24 21:11 UTC | 28 May 24 21:14 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-869191                                                                | multinode-869191     | jenkins | v1.33.1 | 28 May 24 21:14 UTC |                     |
	| node    | multinode-869191 node delete                                                            | multinode-869191     | jenkins | v1.33.1 | 28 May 24 21:14 UTC | 28 May 24 21:14 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-869191 stop                                                                   | multinode-869191     | jenkins | v1.33.1 | 28 May 24 21:14 UTC |                     |
	| start   | -p multinode-869191                                                                     | multinode-869191     | jenkins | v1.33.1 | 28 May 24 21:16 UTC | 28 May 24 21:19 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-869191                                                                | multinode-869191     | jenkins | v1.33.1 | 28 May 24 21:19 UTC |                     |
	| start   | -p multinode-869191-m02                                                                 | multinode-869191-m02 | jenkins | v1.33.1 | 28 May 24 21:19 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-869191-m03                                                                 | multinode-869191-m03 | jenkins | v1.33.1 | 28 May 24 21:19 UTC | 28 May 24 21:20 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-869191                                                                 | multinode-869191     | jenkins | v1.33.1 | 28 May 24 21:20 UTC |                     |
	| delete  | -p multinode-869191-m03                                                                 | multinode-869191-m03 | jenkins | v1.33.1 | 28 May 24 21:20 UTC | 28 May 24 21:20 UTC |
	| delete  | -p multinode-869191                                                                     | multinode-869191     | jenkins | v1.33.1 | 28 May 24 21:20 UTC | 28 May 24 21:20 UTC |
	| start   | -p test-preload-285104                                                                  | test-preload-285104  | jenkins | v1.33.1 | 28 May 24 21:20 UTC | 28 May 24 21:23 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-285104 image pull                                                          | test-preload-285104  | jenkins | v1.33.1 | 28 May 24 21:23 UTC | 28 May 24 21:23 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-285104                                                                  | test-preload-285104  | jenkins | v1.33.1 | 28 May 24 21:23 UTC | 28 May 24 21:23 UTC |
	| start   | -p test-preload-285104                                                                  | test-preload-285104  | jenkins | v1.33.1 | 28 May 24 21:23 UTC | 28 May 24 21:24 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-285104 image list                                                          | test-preload-285104  | jenkins | v1.33.1 | 28 May 24 21:24 UTC | 28 May 24 21:24 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/28 21:23:31
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0528 21:23:31.445908   45293 out.go:291] Setting OutFile to fd 1 ...
	I0528 21:23:31.446136   45293 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:23:31.446145   45293 out.go:304] Setting ErrFile to fd 2...
	I0528 21:23:31.446150   45293 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:23:31.446337   45293 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
	I0528 21:23:31.446793   45293 out.go:298] Setting JSON to false
	I0528 21:23:31.447623   45293 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3954,"bootTime":1716927457,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0528 21:23:31.447673   45293 start.go:139] virtualization: kvm guest
	I0528 21:23:31.449821   45293 out.go:177] * [test-preload-285104] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0528 21:23:31.450971   45293 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 21:23:31.452084   45293 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 21:23:31.450950   45293 notify.go:220] Checking for updates...
	I0528 21:23:31.454350   45293 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 21:23:31.455477   45293 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 21:23:31.456538   45293 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0528 21:23:31.457626   45293 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 21:23:31.459088   45293 config.go:182] Loaded profile config "test-preload-285104": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0528 21:23:31.459487   45293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:23:31.459542   45293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:23:31.473670   45293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32883
	I0528 21:23:31.474018   45293 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:23:31.474476   45293 main.go:141] libmachine: Using API Version  1
	I0528 21:23:31.474498   45293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:23:31.474861   45293 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:23:31.475046   45293 main.go:141] libmachine: (test-preload-285104) Calling .DriverName
	I0528 21:23:31.476591   45293 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0528 21:23:31.477612   45293 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 21:23:31.477909   45293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:23:31.477941   45293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:23:31.491535   45293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43245
	I0528 21:23:31.491834   45293 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:23:31.492198   45293 main.go:141] libmachine: Using API Version  1
	I0528 21:23:31.492222   45293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:23:31.492498   45293 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:23:31.492630   45293 main.go:141] libmachine: (test-preload-285104) Calling .DriverName
	I0528 21:23:31.524691   45293 out.go:177] * Using the kvm2 driver based on existing profile
	I0528 21:23:31.525879   45293 start.go:297] selected driver: kvm2
	I0528 21:23:31.525890   45293 start.go:901] validating driver "kvm2" against &{Name:test-preload-285104 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-285104 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.188 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:23:31.525966   45293 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0528 21:23:31.526584   45293 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 21:23:31.526634   45293 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18966-3963/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0528 21:23:31.539819   45293 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0528 21:23:31.540091   45293 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 21:23:31.540141   45293 cni.go:84] Creating CNI manager for ""
	I0528 21:23:31.540153   45293 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 21:23:31.540200   45293 start.go:340] cluster config:
	{Name:test-preload-285104 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-285104 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.188 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:23:31.540277   45293 iso.go:125] acquiring lock: {Name:mk0ef48bd19f4019c3ee87391d15b9afc1d5e8d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 21:23:31.541889   45293 out.go:177] * Starting "test-preload-285104" primary control-plane node in "test-preload-285104" cluster
	I0528 21:23:31.542994   45293 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0528 21:23:32.017230   45293 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0528 21:23:32.017264   45293 cache.go:56] Caching tarball of preloaded images
	I0528 21:23:32.017416   45293 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0528 21:23:32.019283   45293 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0528 21:23:32.020569   45293 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0528 21:23:32.130786   45293 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0528 21:23:44.834436   45293 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0528 21:23:44.834524   45293 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0528 21:23:45.672436   45293 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0528 21:23:45.672554   45293 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/test-preload-285104/config.json ...
	I0528 21:23:45.672772   45293 start.go:360] acquireMachinesLock for test-preload-285104: {Name:mk6fb3103b370c7f3d9923fd9de7afe2c77239d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0528 21:23:45.672827   45293 start.go:364] duration metric: took 35.882µs to acquireMachinesLock for "test-preload-285104"
	I0528 21:23:45.672841   45293 start.go:96] Skipping create...Using existing machine configuration
	I0528 21:23:45.672850   45293 fix.go:54] fixHost starting: 
	I0528 21:23:45.673144   45293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:23:45.673176   45293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:23:45.687346   45293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46309
	I0528 21:23:45.687776   45293 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:23:45.688270   45293 main.go:141] libmachine: Using API Version  1
	I0528 21:23:45.688294   45293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:23:45.688655   45293 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:23:45.688843   45293 main.go:141] libmachine: (test-preload-285104) Calling .DriverName
	I0528 21:23:45.689025   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetState
	I0528 21:23:45.690611   45293 fix.go:112] recreateIfNeeded on test-preload-285104: state=Stopped err=<nil>
	I0528 21:23:45.690633   45293 main.go:141] libmachine: (test-preload-285104) Calling .DriverName
	W0528 21:23:45.690821   45293 fix.go:138] unexpected machine state, will restart: <nil>
	I0528 21:23:45.693123   45293 out.go:177] * Restarting existing kvm2 VM for "test-preload-285104" ...
	I0528 21:23:45.694331   45293 main.go:141] libmachine: (test-preload-285104) Calling .Start
	I0528 21:23:45.694481   45293 main.go:141] libmachine: (test-preload-285104) Ensuring networks are active...
	I0528 21:23:45.695152   45293 main.go:141] libmachine: (test-preload-285104) Ensuring network default is active
	I0528 21:23:45.695465   45293 main.go:141] libmachine: (test-preload-285104) Ensuring network mk-test-preload-285104 is active
	I0528 21:23:45.695842   45293 main.go:141] libmachine: (test-preload-285104) Getting domain xml...
	I0528 21:23:45.696529   45293 main.go:141] libmachine: (test-preload-285104) Creating domain...
	I0528 21:23:46.870848   45293 main.go:141] libmachine: (test-preload-285104) Waiting to get IP...
	I0528 21:23:46.871605   45293 main.go:141] libmachine: (test-preload-285104) DBG | domain test-preload-285104 has defined MAC address 52:54:00:7b:83:e0 in network mk-test-preload-285104
	I0528 21:23:46.871937   45293 main.go:141] libmachine: (test-preload-285104) DBG | unable to find current IP address of domain test-preload-285104 in network mk-test-preload-285104
	I0528 21:23:46.872005   45293 main.go:141] libmachine: (test-preload-285104) DBG | I0528 21:23:46.871930   45376 retry.go:31] will retry after 196.066302ms: waiting for machine to come up
	I0528 21:23:47.069365   45293 main.go:141] libmachine: (test-preload-285104) DBG | domain test-preload-285104 has defined MAC address 52:54:00:7b:83:e0 in network mk-test-preload-285104
	I0528 21:23:47.069825   45293 main.go:141] libmachine: (test-preload-285104) DBG | unable to find current IP address of domain test-preload-285104 in network mk-test-preload-285104
	I0528 21:23:47.069846   45293 main.go:141] libmachine: (test-preload-285104) DBG | I0528 21:23:47.069773   45376 retry.go:31] will retry after 336.546226ms: waiting for machine to come up
	I0528 21:23:47.408260   45293 main.go:141] libmachine: (test-preload-285104) DBG | domain test-preload-285104 has defined MAC address 52:54:00:7b:83:e0 in network mk-test-preload-285104
	I0528 21:23:47.408726   45293 main.go:141] libmachine: (test-preload-285104) DBG | unable to find current IP address of domain test-preload-285104 in network mk-test-preload-285104
	I0528 21:23:47.408761   45293 main.go:141] libmachine: (test-preload-285104) DBG | I0528 21:23:47.408655   45376 retry.go:31] will retry after 313.614458ms: waiting for machine to come up
	I0528 21:23:47.724169   45293 main.go:141] libmachine: (test-preload-285104) DBG | domain test-preload-285104 has defined MAC address 52:54:00:7b:83:e0 in network mk-test-preload-285104
	I0528 21:23:47.724648   45293 main.go:141] libmachine: (test-preload-285104) DBG | unable to find current IP address of domain test-preload-285104 in network mk-test-preload-285104
	I0528 21:23:47.724674   45293 main.go:141] libmachine: (test-preload-285104) DBG | I0528 21:23:47.724618   45376 retry.go:31] will retry after 451.279685ms: waiting for machine to come up
	I0528 21:23:48.177274   45293 main.go:141] libmachine: (test-preload-285104) DBG | domain test-preload-285104 has defined MAC address 52:54:00:7b:83:e0 in network mk-test-preload-285104
	I0528 21:23:48.177838   45293 main.go:141] libmachine: (test-preload-285104) DBG | unable to find current IP address of domain test-preload-285104 in network mk-test-preload-285104
	I0528 21:23:48.177886   45293 main.go:141] libmachine: (test-preload-285104) DBG | I0528 21:23:48.177776   45376 retry.go:31] will retry after 534.615684ms: waiting for machine to come up
	I0528 21:23:48.714489   45293 main.go:141] libmachine: (test-preload-285104) DBG | domain test-preload-285104 has defined MAC address 52:54:00:7b:83:e0 in network mk-test-preload-285104
	I0528 21:23:48.715093   45293 main.go:141] libmachine: (test-preload-285104) DBG | unable to find current IP address of domain test-preload-285104 in network mk-test-preload-285104
	I0528 21:23:48.715120   45293 main.go:141] libmachine: (test-preload-285104) DBG | I0528 21:23:48.715031   45376 retry.go:31] will retry after 623.99554ms: waiting for machine to come up
	I0528 21:23:49.340865   45293 main.go:141] libmachine: (test-preload-285104) DBG | domain test-preload-285104 has defined MAC address 52:54:00:7b:83:e0 in network mk-test-preload-285104
	I0528 21:23:49.341215   45293 main.go:141] libmachine: (test-preload-285104) DBG | unable to find current IP address of domain test-preload-285104 in network mk-test-preload-285104
	I0528 21:23:49.341245   45293 main.go:141] libmachine: (test-preload-285104) DBG | I0528 21:23:49.341159   45376 retry.go:31] will retry after 900.873782ms: waiting for machine to come up
	I0528 21:23:50.243081   45293 main.go:141] libmachine: (test-preload-285104) DBG | domain test-preload-285104 has defined MAC address 52:54:00:7b:83:e0 in network mk-test-preload-285104
	I0528 21:23:50.243544   45293 main.go:141] libmachine: (test-preload-285104) DBG | unable to find current IP address of domain test-preload-285104 in network mk-test-preload-285104
	I0528 21:23:50.243571   45293 main.go:141] libmachine: (test-preload-285104) DBG | I0528 21:23:50.243490   45376 retry.go:31] will retry after 1.03995195s: waiting for machine to come up
	I0528 21:23:51.285338   45293 main.go:141] libmachine: (test-preload-285104) DBG | domain test-preload-285104 has defined MAC address 52:54:00:7b:83:e0 in network mk-test-preload-285104
	I0528 21:23:51.285777   45293 main.go:141] libmachine: (test-preload-285104) DBG | unable to find current IP address of domain test-preload-285104 in network mk-test-preload-285104
	I0528 21:23:51.285806   45293 main.go:141] libmachine: (test-preload-285104) DBG | I0528 21:23:51.285725   45376 retry.go:31] will retry after 1.579820966s: waiting for machine to come up
	I0528 21:23:52.867521   45293 main.go:141] libmachine: (test-preload-285104) DBG | domain test-preload-285104 has defined MAC address 52:54:00:7b:83:e0 in network mk-test-preload-285104
	I0528 21:23:52.867974   45293 main.go:141] libmachine: (test-preload-285104) DBG | unable to find current IP address of domain test-preload-285104 in network mk-test-preload-285104
	I0528 21:23:52.868003   45293 main.go:141] libmachine: (test-preload-285104) DBG | I0528 21:23:52.867913   45376 retry.go:31] will retry after 1.511168608s: waiting for machine to come up
	I0528 21:23:54.381631   45293 main.go:141] libmachine: (test-preload-285104) DBG | domain test-preload-285104 has defined MAC address 52:54:00:7b:83:e0 in network mk-test-preload-285104
	I0528 21:23:54.381932   45293 main.go:141] libmachine: (test-preload-285104) DBG | unable to find current IP address of domain test-preload-285104 in network mk-test-preload-285104
	I0528 21:23:54.381967   45293 main.go:141] libmachine: (test-preload-285104) DBG | I0528 21:23:54.381869   45376 retry.go:31] will retry after 2.831764485s: waiting for machine to come up
	I0528 21:23:57.214926   45293 main.go:141] libmachine: (test-preload-285104) DBG | domain test-preload-285104 has defined MAC address 52:54:00:7b:83:e0 in network mk-test-preload-285104
	I0528 21:23:57.215375   45293 main.go:141] libmachine: (test-preload-285104) DBG | unable to find current IP address of domain test-preload-285104 in network mk-test-preload-285104
	I0528 21:23:57.215399   45293 main.go:141] libmachine: (test-preload-285104) DBG | I0528 21:23:57.215342   45376 retry.go:31] will retry after 3.132090651s: waiting for machine to come up
	I0528 21:24:00.351537   45293 main.go:141] libmachine: (test-preload-285104) DBG | domain test-preload-285104 has defined MAC address 52:54:00:7b:83:e0 in network mk-test-preload-285104
	I0528 21:24:00.351936   45293 main.go:141] libmachine: (test-preload-285104) DBG | unable to find current IP address of domain test-preload-285104 in network mk-test-preload-285104
	I0528 21:24:00.351964   45293 main.go:141] libmachine: (test-preload-285104) DBG | I0528 21:24:00.351898   45376 retry.go:31] will retry after 4.530995094s: waiting for machine to come up
	I0528 21:24:04.887906   45293 main.go:141] libmachine: (test-preload-285104) DBG | domain test-preload-285104 has defined MAC address 52:54:00:7b:83:e0 in network mk-test-preload-285104
	I0528 21:24:04.888377   45293 main.go:141] libmachine: (test-preload-285104) Found IP for machine: 192.168.39.188
	I0528 21:24:04.888416   45293 main.go:141] libmachine: (test-preload-285104) DBG | domain test-preload-285104 has current primary IP address 192.168.39.188 and MAC address 52:54:00:7b:83:e0 in network mk-test-preload-285104
	I0528 21:24:04.888426   45293 main.go:141] libmachine: (test-preload-285104) Reserving static IP address...
	I0528 21:24:04.888803   45293 main.go:141] libmachine: (test-preload-285104) Reserved static IP address: 192.168.39.188
	I0528 21:24:04.888838   45293 main.go:141] libmachine: (test-preload-285104) DBG | found host DHCP lease matching {name: "test-preload-285104", mac: "52:54:00:7b:83:e0", ip: "192.168.39.188"} in network mk-test-preload-285104: {Iface:virbr1 ExpiryTime:2024-05-28 22:20:50 +0000 UTC Type:0 Mac:52:54:00:7b:83:e0 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-285104 Clientid:01:52:54:00:7b:83:e0}
	I0528 21:24:04.888852   45293 main.go:141] libmachine: (test-preload-285104) Waiting for SSH to be available...
	I0528 21:24:04.888879   45293 main.go:141] libmachine: (test-preload-285104) DBG | skip adding static IP to network mk-test-preload-285104 - found existing host DHCP lease matching {name: "test-preload-285104", mac: "52:54:00:7b:83:e0", ip: "192.168.39.188"}
	I0528 21:24:04.888893   45293 main.go:141] libmachine: (test-preload-285104) DBG | Getting to WaitForSSH function...
	I0528 21:24:04.891087   45293 main.go:141] libmachine: (test-preload-285104) DBG | domain test-preload-285104 has defined MAC address 52:54:00:7b:83:e0 in network mk-test-preload-285104
	I0528 21:24:04.891450   45293 main.go:141] libmachine: (test-preload-285104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:83:e0", ip: ""} in network mk-test-preload-285104: {Iface:virbr1 ExpiryTime:2024-05-28 22:20:50 +0000 UTC Type:0 Mac:52:54:00:7b:83:e0 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-285104 Clientid:01:52:54:00:7b:83:e0}
	I0528 21:24:04.891484   45293 main.go:141] libmachine: (test-preload-285104) DBG | domain test-preload-285104 has defined IP address 192.168.39.188 and MAC address 52:54:00:7b:83:e0 in network mk-test-preload-285104
	I0528 21:24:04.891593   45293 main.go:141] libmachine: (test-preload-285104) DBG | Using SSH client type: external
	I0528 21:24:04.891620   45293 main.go:141] libmachine: (test-preload-285104) DBG | Using SSH private key: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/test-preload-285104/id_rsa (-rw-------)
	I0528 21:24:04.891654   45293 main.go:141] libmachine: (test-preload-285104) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.188 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18966-3963/.minikube/machines/test-preload-285104/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0528 21:24:04.891673   45293 main.go:141] libmachine: (test-preload-285104) DBG | About to run SSH command:
	I0528 21:24:04.891686   45293 main.go:141] libmachine: (test-preload-285104) DBG | exit 0
	I0528 21:24:05.021708   45293 main.go:141] libmachine: (test-preload-285104) DBG | SSH cmd err, output: <nil>: 
	I0528 21:24:05.022099   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetConfigRaw
	I0528 21:24:05.022722   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetIP
	I0528 21:24:05.025000   45293 main.go:141] libmachine: (test-preload-285104) DBG | domain test-preload-285104 has defined MAC address 52:54:00:7b:83:e0 in network mk-test-preload-285104
	I0528 21:24:05.025328   45293 main.go:141] libmachine: (test-preload-285104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:83:e0", ip: ""} in network mk-test-preload-285104: {Iface:virbr1 ExpiryTime:2024-05-28 22:20:50 +0000 UTC Type:0 Mac:52:54:00:7b:83:e0 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-285104 Clientid:01:52:54:00:7b:83:e0}
	I0528 21:24:05.025356   45293 main.go:141] libmachine: (test-preload-285104) DBG | domain test-preload-285104 has defined IP address 192.168.39.188 and MAC address 52:54:00:7b:83:e0 in network mk-test-preload-285104
	I0528 21:24:05.025675   45293 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/test-preload-285104/config.json ...
	I0528 21:24:05.025872   45293 machine.go:94] provisionDockerMachine start ...
	I0528 21:24:05.025890   45293 main.go:141] libmachine: (test-preload-285104) Calling .DriverName
	I0528 21:24:05.026123   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetSSHHostname
	I0528 21:24:05.028255   45293 main.go:141] libmachine: (test-preload-285104) DBG | domain test-preload-285104 has defined MAC address 52:54:00:7b:83:e0 in network mk-test-preload-285104
	I0528 21:24:05.028578   45293 main.go:141] libmachine: (test-preload-285104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:83:e0", ip: ""} in network mk-test-preload-285104: {Iface:virbr1 ExpiryTime:2024-05-28 22:20:50 +0000 UTC Type:0 Mac:52:54:00:7b:83:e0 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-285104 Clientid:01:52:54:00:7b:83:e0}
	I0528 21:24:05.028616   45293 main.go:141] libmachine: (test-preload-285104) DBG | domain test-preload-285104 has defined IP address 192.168.39.188 and MAC address 52:54:00:7b:83:e0 in network mk-test-preload-285104
	I0528 21:24:05.028723   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetSSHPort
	I0528 21:24:05.028897   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetSSHKeyPath
	I0528 21:24:05.029066   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetSSHKeyPath
	I0528 21:24:05.029201   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetSSHUsername
	I0528 21:24:05.029392   45293 main.go:141] libmachine: Using SSH client type: native
	I0528 21:24:05.029582   45293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I0528 21:24:05.029595   45293 main.go:141] libmachine: About to run SSH command:
	hostname
	I0528 21:24:05.142321   45293 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0528 21:24:05.142352   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetMachineName
	I0528 21:24:05.142612   45293 buildroot.go:166] provisioning hostname "test-preload-285104"
	I0528 21:24:05.142641   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetMachineName
	I0528 21:24:05.142845   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetSSHHostname
	I0528 21:24:05.145566   45293 main.go:141] libmachine: (test-preload-285104) DBG | domain test-preload-285104 has defined MAC address 52:54:00:7b:83:e0 in network mk-test-preload-285104
	I0528 21:24:05.145899   45293 main.go:141] libmachine: (test-preload-285104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:83:e0", ip: ""} in network mk-test-preload-285104: {Iface:virbr1 ExpiryTime:2024-05-28 22:20:50 +0000 UTC Type:0 Mac:52:54:00:7b:83:e0 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-285104 Clientid:01:52:54:00:7b:83:e0}
	I0528 21:24:05.145918   45293 main.go:141] libmachine: (test-preload-285104) DBG | domain test-preload-285104 has defined IP address 192.168.39.188 and MAC address 52:54:00:7b:83:e0 in network mk-test-preload-285104
	I0528 21:24:05.146222   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetSSHPort
	I0528 21:24:05.146395   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetSSHKeyPath
	I0528 21:24:05.146594   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetSSHKeyPath
	I0528 21:24:05.146730   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetSSHUsername
	I0528 21:24:05.146930   45293 main.go:141] libmachine: Using SSH client type: native
	I0528 21:24:05.147098   45293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I0528 21:24:05.147109   45293 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-285104 && echo "test-preload-285104" | sudo tee /etc/hostname
	I0528 21:24:05.280084   45293 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-285104
	
	I0528 21:24:05.280129   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetSSHHostname
	I0528 21:24:05.283043   45293 main.go:141] libmachine: (test-preload-285104) DBG | domain test-preload-285104 has defined MAC address 52:54:00:7b:83:e0 in network mk-test-preload-285104
	I0528 21:24:05.283365   45293 main.go:141] libmachine: (test-preload-285104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:83:e0", ip: ""} in network mk-test-preload-285104: {Iface:virbr1 ExpiryTime:2024-05-28 22:20:50 +0000 UTC Type:0 Mac:52:54:00:7b:83:e0 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-285104 Clientid:01:52:54:00:7b:83:e0}
	I0528 21:24:05.283395   45293 main.go:141] libmachine: (test-preload-285104) DBG | domain test-preload-285104 has defined IP address 192.168.39.188 and MAC address 52:54:00:7b:83:e0 in network mk-test-preload-285104
	I0528 21:24:05.283542   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetSSHPort
	I0528 21:24:05.283738   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetSSHKeyPath
	I0528 21:24:05.283965   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetSSHKeyPath
	I0528 21:24:05.284097   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetSSHUsername
	I0528 21:24:05.284280   45293 main.go:141] libmachine: Using SSH client type: native
	I0528 21:24:05.284440   45293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I0528 21:24:05.284456   45293 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-285104' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-285104/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-285104' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 21:24:05.410463   45293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 21:24:05.410502   45293 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18966-3963/.minikube CaCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18966-3963/.minikube}
	I0528 21:24:05.410563   45293 buildroot.go:174] setting up certificates
	I0528 21:24:05.410577   45293 provision.go:84] configureAuth start
	I0528 21:24:05.410591   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetMachineName
	I0528 21:24:05.410869   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetIP
	I0528 21:24:05.413572   45293 main.go:141] libmachine: (test-preload-285104) DBG | domain test-preload-285104 has defined MAC address 52:54:00:7b:83:e0 in network mk-test-preload-285104
	I0528 21:24:05.413998   45293 main.go:141] libmachine: (test-preload-285104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:83:e0", ip: ""} in network mk-test-preload-285104: {Iface:virbr1 ExpiryTime:2024-05-28 22:20:50 +0000 UTC Type:0 Mac:52:54:00:7b:83:e0 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-285104 Clientid:01:52:54:00:7b:83:e0}
	I0528 21:24:05.414039   45293 main.go:141] libmachine: (test-preload-285104) DBG | domain test-preload-285104 has defined IP address 192.168.39.188 and MAC address 52:54:00:7b:83:e0 in network mk-test-preload-285104
	I0528 21:24:05.414466   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetSSHHostname
	I0528 21:24:05.416932   45293 main.go:141] libmachine: (test-preload-285104) DBG | domain test-preload-285104 has defined MAC address 52:54:00:7b:83:e0 in network mk-test-preload-285104
	I0528 21:24:05.417256   45293 main.go:141] libmachine: (test-preload-285104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:83:e0", ip: ""} in network mk-test-preload-285104: {Iface:virbr1 ExpiryTime:2024-05-28 22:20:50 +0000 UTC Type:0 Mac:52:54:00:7b:83:e0 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-285104 Clientid:01:52:54:00:7b:83:e0}
	I0528 21:24:05.417306   45293 main.go:141] libmachine: (test-preload-285104) DBG | domain test-preload-285104 has defined IP address 192.168.39.188 and MAC address 52:54:00:7b:83:e0 in network mk-test-preload-285104
	I0528 21:24:05.417436   45293 provision.go:143] copyHostCerts
	I0528 21:24:05.417488   45293 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem, removing ...
	I0528 21:24:05.417505   45293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem
	I0528 21:24:05.417578   45293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem (1078 bytes)
	I0528 21:24:05.417664   45293 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem, removing ...
	I0528 21:24:05.417671   45293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem
	I0528 21:24:05.417695   45293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem (1123 bytes)
	I0528 21:24:05.417745   45293 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem, removing ...
	I0528 21:24:05.417752   45293 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem
	I0528 21:24:05.417799   45293 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem (1675 bytes)
	I0528 21:24:05.417894   45293 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem org=jenkins.test-preload-285104 san=[127.0.0.1 192.168.39.188 localhost minikube test-preload-285104]
	I0528 21:24:05.668694   45293 provision.go:177] copyRemoteCerts
	I0528 21:24:05.668751   45293 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 21:24:05.668778   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetSSHHostname
	I0528 21:24:05.671464   45293 main.go:141] libmachine: (test-preload-285104) DBG | domain test-preload-285104 has defined MAC address 52:54:00:7b:83:e0 in network mk-test-preload-285104
	I0528 21:24:05.671781   45293 main.go:141] libmachine: (test-preload-285104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:83:e0", ip: ""} in network mk-test-preload-285104: {Iface:virbr1 ExpiryTime:2024-05-28 22:20:50 +0000 UTC Type:0 Mac:52:54:00:7b:83:e0 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-285104 Clientid:01:52:54:00:7b:83:e0}
	I0528 21:24:05.671812   45293 main.go:141] libmachine: (test-preload-285104) DBG | domain test-preload-285104 has defined IP address 192.168.39.188 and MAC address 52:54:00:7b:83:e0 in network mk-test-preload-285104
	I0528 21:24:05.671927   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetSSHPort
	I0528 21:24:05.672133   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetSSHKeyPath
	I0528 21:24:05.672277   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetSSHUsername
	I0528 21:24:05.672411   45293 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/test-preload-285104/id_rsa Username:docker}
	I0528 21:24:05.760118   45293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0528 21:24:05.784378   45293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0528 21:24:05.807367   45293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0528 21:24:05.830460   45293 provision.go:87] duration metric: took 419.873724ms to configureAuth
	I0528 21:24:05.830482   45293 buildroot.go:189] setting minikube options for container-runtime
	I0528 21:24:05.830677   45293 config.go:182] Loaded profile config "test-preload-285104": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0528 21:24:05.830756   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetSSHHostname
	I0528 21:24:05.833414   45293 main.go:141] libmachine: (test-preload-285104) DBG | domain test-preload-285104 has defined MAC address 52:54:00:7b:83:e0 in network mk-test-preload-285104
	I0528 21:24:05.833804   45293 main.go:141] libmachine: (test-preload-285104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:83:e0", ip: ""} in network mk-test-preload-285104: {Iface:virbr1 ExpiryTime:2024-05-28 22:20:50 +0000 UTC Type:0 Mac:52:54:00:7b:83:e0 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-285104 Clientid:01:52:54:00:7b:83:e0}
	I0528 21:24:05.833836   45293 main.go:141] libmachine: (test-preload-285104) DBG | domain test-preload-285104 has defined IP address 192.168.39.188 and MAC address 52:54:00:7b:83:e0 in network mk-test-preload-285104
	I0528 21:24:05.834006   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetSSHPort
	I0528 21:24:05.834191   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetSSHKeyPath
	I0528 21:24:05.834352   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetSSHKeyPath
	I0528 21:24:05.834497   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetSSHUsername
	I0528 21:24:05.834661   45293 main.go:141] libmachine: Using SSH client type: native
	I0528 21:24:05.834850   45293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I0528 21:24:05.834866   45293 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0528 21:24:06.121460   45293 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0528 21:24:06.121484   45293 machine.go:97] duration metric: took 1.095599245s to provisionDockerMachine
	I0528 21:24:06.121497   45293 start.go:293] postStartSetup for "test-preload-285104" (driver="kvm2")
	I0528 21:24:06.121511   45293 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0528 21:24:06.121534   45293 main.go:141] libmachine: (test-preload-285104) Calling .DriverName
	I0528 21:24:06.121827   45293 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0528 21:24:06.121855   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetSSHHostname
	I0528 21:24:06.124336   45293 main.go:141] libmachine: (test-preload-285104) DBG | domain test-preload-285104 has defined MAC address 52:54:00:7b:83:e0 in network mk-test-preload-285104
	I0528 21:24:06.124675   45293 main.go:141] libmachine: (test-preload-285104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:83:e0", ip: ""} in network mk-test-preload-285104: {Iface:virbr1 ExpiryTime:2024-05-28 22:20:50 +0000 UTC Type:0 Mac:52:54:00:7b:83:e0 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-285104 Clientid:01:52:54:00:7b:83:e0}
	I0528 21:24:06.124695   45293 main.go:141] libmachine: (test-preload-285104) DBG | domain test-preload-285104 has defined IP address 192.168.39.188 and MAC address 52:54:00:7b:83:e0 in network mk-test-preload-285104
	I0528 21:24:06.124808   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetSSHPort
	I0528 21:24:06.124988   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetSSHKeyPath
	I0528 21:24:06.125164   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetSSHUsername
	I0528 21:24:06.125322   45293 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/test-preload-285104/id_rsa Username:docker}
	I0528 21:24:06.212779   45293 ssh_runner.go:195] Run: cat /etc/os-release
	I0528 21:24:06.216936   45293 info.go:137] Remote host: Buildroot 2023.02.9
	I0528 21:24:06.216955   45293 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/addons for local assets ...
	I0528 21:24:06.217010   45293 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/files for local assets ...
	I0528 21:24:06.217082   45293 filesync.go:149] local asset: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem -> 117602.pem in /etc/ssl/certs
	I0528 21:24:06.217166   45293 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0528 21:24:06.226006   45293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem --> /etc/ssl/certs/117602.pem (1708 bytes)
	I0528 21:24:06.248711   45293 start.go:296] duration metric: took 127.203618ms for postStartSetup
	I0528 21:24:06.248737   45293 fix.go:56] duration metric: took 20.575887933s for fixHost
	I0528 21:24:06.248754   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetSSHHostname
	I0528 21:24:06.251184   45293 main.go:141] libmachine: (test-preload-285104) DBG | domain test-preload-285104 has defined MAC address 52:54:00:7b:83:e0 in network mk-test-preload-285104
	I0528 21:24:06.251493   45293 main.go:141] libmachine: (test-preload-285104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:83:e0", ip: ""} in network mk-test-preload-285104: {Iface:virbr1 ExpiryTime:2024-05-28 22:20:50 +0000 UTC Type:0 Mac:52:54:00:7b:83:e0 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-285104 Clientid:01:52:54:00:7b:83:e0}
	I0528 21:24:06.251515   45293 main.go:141] libmachine: (test-preload-285104) DBG | domain test-preload-285104 has defined IP address 192.168.39.188 and MAC address 52:54:00:7b:83:e0 in network mk-test-preload-285104
	I0528 21:24:06.251685   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetSSHPort
	I0528 21:24:06.251869   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetSSHKeyPath
	I0528 21:24:06.252032   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetSSHKeyPath
	I0528 21:24:06.252159   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetSSHUsername
	I0528 21:24:06.252290   45293 main.go:141] libmachine: Using SSH client type: native
	I0528 21:24:06.252437   45293 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I0528 21:24:06.252450   45293 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0528 21:24:06.361954   45293 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716931446.339553207
	
	I0528 21:24:06.361970   45293 fix.go:216] guest clock: 1716931446.339553207
	I0528 21:24:06.361977   45293 fix.go:229] Guest: 2024-05-28 21:24:06.339553207 +0000 UTC Remote: 2024-05-28 21:24:06.248740854 +0000 UTC m=+34.836018236 (delta=90.812353ms)
	I0528 21:24:06.362012   45293 fix.go:200] guest clock delta is within tolerance: 90.812353ms
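
The fix step reads the guest clock over SSH with date +%s.%N and compares it against the host clock, proceeding only when the delta is small. A rough illustration of that check, assuming SSH access to the guest from the log and a hypothetical one-second tolerance:

    #!/usr/bin/env bash
    # Sketch: compare a remote (guest) clock against the local (host) clock.
    # HOSTSPEC and the 1-second tolerance are illustrative assumptions.
    set -euo pipefail

    HOSTSPEC="docker@192.168.39.188"          # guest address from the log
    guest=$(ssh "$HOSTSPEC" 'date +%s.%N')    # guest wall clock
    host=$(date +%s.%N)                       # host wall clock

    delta=$(awk -v g="$guest" -v h="$host" 'BEGIN { d = g - h; if (d < 0) d = -d; print d }')
    echo "guest/host clock delta: ${delta}s"
    awk -v d="$delta" 'BEGIN { exit (d < 1.0) ? 0 : 1 }' \
      && echo "delta within tolerance" \
      || echo "delta too large; clock sync may be needed"
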
	I0528 21:24:06.362017   45293 start.go:83] releasing machines lock for "test-preload-285104", held for 20.689180578s
	I0528 21:24:06.362035   45293 main.go:141] libmachine: (test-preload-285104) Calling .DriverName
	I0528 21:24:06.362281   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetIP
	I0528 21:24:06.364564   45293 main.go:141] libmachine: (test-preload-285104) DBG | domain test-preload-285104 has defined MAC address 52:54:00:7b:83:e0 in network mk-test-preload-285104
	I0528 21:24:06.364916   45293 main.go:141] libmachine: (test-preload-285104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:83:e0", ip: ""} in network mk-test-preload-285104: {Iface:virbr1 ExpiryTime:2024-05-28 22:20:50 +0000 UTC Type:0 Mac:52:54:00:7b:83:e0 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-285104 Clientid:01:52:54:00:7b:83:e0}
	I0528 21:24:06.364945   45293 main.go:141] libmachine: (test-preload-285104) DBG | domain test-preload-285104 has defined IP address 192.168.39.188 and MAC address 52:54:00:7b:83:e0 in network mk-test-preload-285104
	I0528 21:24:06.365078   45293 main.go:141] libmachine: (test-preload-285104) Calling .DriverName
	I0528 21:24:06.365504   45293 main.go:141] libmachine: (test-preload-285104) Calling .DriverName
	I0528 21:24:06.365678   45293 main.go:141] libmachine: (test-preload-285104) Calling .DriverName
	I0528 21:24:06.365815   45293 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0528 21:24:06.365864   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetSSHHostname
	I0528 21:24:06.365954   45293 ssh_runner.go:195] Run: cat /version.json
	I0528 21:24:06.365972   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetSSHHostname
	I0528 21:24:06.368459   45293 main.go:141] libmachine: (test-preload-285104) DBG | domain test-preload-285104 has defined MAC address 52:54:00:7b:83:e0 in network mk-test-preload-285104
	I0528 21:24:06.368717   45293 main.go:141] libmachine: (test-preload-285104) DBG | domain test-preload-285104 has defined MAC address 52:54:00:7b:83:e0 in network mk-test-preload-285104
	I0528 21:24:06.368769   45293 main.go:141] libmachine: (test-preload-285104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:83:e0", ip: ""} in network mk-test-preload-285104: {Iface:virbr1 ExpiryTime:2024-05-28 22:20:50 +0000 UTC Type:0 Mac:52:54:00:7b:83:e0 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-285104 Clientid:01:52:54:00:7b:83:e0}
	I0528 21:24:06.368793   45293 main.go:141] libmachine: (test-preload-285104) DBG | domain test-preload-285104 has defined IP address 192.168.39.188 and MAC address 52:54:00:7b:83:e0 in network mk-test-preload-285104
	I0528 21:24:06.368891   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetSSHPort
	I0528 21:24:06.369057   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetSSHKeyPath
	I0528 21:24:06.369133   45293 main.go:141] libmachine: (test-preload-285104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:83:e0", ip: ""} in network mk-test-preload-285104: {Iface:virbr1 ExpiryTime:2024-05-28 22:20:50 +0000 UTC Type:0 Mac:52:54:00:7b:83:e0 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-285104 Clientid:01:52:54:00:7b:83:e0}
	I0528 21:24:06.369160   45293 main.go:141] libmachine: (test-preload-285104) DBG | domain test-preload-285104 has defined IP address 192.168.39.188 and MAC address 52:54:00:7b:83:e0 in network mk-test-preload-285104
	I0528 21:24:06.369197   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetSSHUsername
	I0528 21:24:06.369306   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetSSHPort
	I0528 21:24:06.369367   45293 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/test-preload-285104/id_rsa Username:docker}
	I0528 21:24:06.369452   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetSSHKeyPath
	I0528 21:24:06.369582   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetSSHUsername
	I0528 21:24:06.369718   45293 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/test-preload-285104/id_rsa Username:docker}
	I0528 21:24:06.473236   45293 ssh_runner.go:195] Run: systemctl --version
	I0528 21:24:06.479012   45293 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0528 21:24:06.625995   45293 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0528 21:24:06.632215   45293 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0528 21:24:06.632282   45293 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 21:24:06.649282   45293 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0528 21:24:06.649320   45293 start.go:494] detecting cgroup driver to use...
	I0528 21:24:06.649381   45293 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0528 21:24:06.664999   45293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 21:24:06.678269   45293 docker.go:217] disabling cri-docker service (if available) ...
	I0528 21:24:06.678309   45293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0528 21:24:06.691049   45293 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0528 21:24:06.704040   45293 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0528 21:24:06.816955   45293 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0528 21:24:06.972087   45293 docker.go:233] disabling docker service ...
	I0528 21:24:06.972160   45293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0528 21:24:06.985650   45293 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0528 21:24:06.997682   45293 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0528 21:24:07.117475   45293 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0528 21:24:07.232247   45293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
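
Because this profile uses CRI-O, the sequence above stops and masks both cri-docker and docker before continuing, so that only CRI-O answers on the CRI socket. The equivalent commands, roughly as run over SSH in the log:

    #!/usr/bin/env bash
    # Sketch: disable the Docker-based runtimes so only CRI-O serves the CRI socket,
    # following the systemctl sequence in the log above.
    set -euo pipefail

    sudo systemctl stop -f cri-docker.socket cri-docker.service || true
    sudo systemctl disable cri-docker.socket || true
    sudo systemctl mask cri-docker.service

    sudo systemctl stop -f docker.socket docker.service || true
    sudo systemctl disable docker.socket || true
    sudo systemctl mask docker.service

    # Confirm docker is no longer active (mirrors the final is-active check).
    systemctl is-active --quiet docker && echo "docker still active" || echo "docker inactive"
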
	I0528 21:24:07.245857   45293 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 21:24:07.263613   45293 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0528 21:24:07.263666   45293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:24:07.273520   45293 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0528 21:24:07.273572   45293 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:24:07.283319   45293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:24:07.293090   45293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:24:07.303043   45293 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0528 21:24:07.313307   45293 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:24:07.322978   45293 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:24:07.339308   45293 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:24:07.349032   45293 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0528 21:24:07.357816   45293 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0528 21:24:07.357861   45293 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0528 21:24:07.370043   45293 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0528 21:24:07.379168   45293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 21:24:07.490981   45293 ssh_runner.go:195] Run: sudo systemctl restart crio
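
The steps from 21:24:07.245 to 21:24:07.490 edit /etc/crio/crio.conf.d/02-crio.conf in place: point crictl at the CRI-O socket, pin the pause image, switch the cgroup manager to cgroupfs, and open unprivileged low ports via default_sysctls, then restart CRI-O. Collected into one sketch (same sed edits as the log, minus the netfilter probing):

    #!/usr/bin/env bash
    # Sketch: the CRI-O configuration edits performed in the log, in one script.
    set -euo pipefail
    CONF=/etc/crio/crio.conf.d/02-crio.conf

    # Point crictl at the CRI-O socket.
    printf '%s\n' "runtime-endpoint: unix:///var/run/crio/crio.sock" | sudo tee /etc/crictl.yaml >/dev/null

    # Pin the pause image expected by Kubernetes v1.24.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' "$CONF"

    # Use cgroupfs as the cgroup manager and keep conmon in the pod cgroup.
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"

    # Allow pods to bind low ports without extra privileges.
    sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' "$CONF"
    sudo grep -q '^ *default_sysctls' "$CONF" || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"

    # Apply.
    sudo systemctl daemon-reload
    sudo systemctl restart crio
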
	I0528 21:24:07.623567   45293 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0528 21:24:07.623676   45293 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0528 21:24:07.628782   45293 start.go:562] Will wait 60s for crictl version
	I0528 21:24:07.628829   45293 ssh_runner.go:195] Run: which crictl
	I0528 21:24:07.632724   45293 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0528 21:24:07.670584   45293 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0528 21:24:07.670658   45293 ssh_runner.go:195] Run: crio --version
	I0528 21:24:07.701136   45293 ssh_runner.go:195] Run: crio --version
	I0528 21:24:07.729153   45293 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0528 21:24:07.730667   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetIP
	I0528 21:24:07.733330   45293 main.go:141] libmachine: (test-preload-285104) DBG | domain test-preload-285104 has defined MAC address 52:54:00:7b:83:e0 in network mk-test-preload-285104
	I0528 21:24:07.733662   45293 main.go:141] libmachine: (test-preload-285104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:83:e0", ip: ""} in network mk-test-preload-285104: {Iface:virbr1 ExpiryTime:2024-05-28 22:20:50 +0000 UTC Type:0 Mac:52:54:00:7b:83:e0 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-285104 Clientid:01:52:54:00:7b:83:e0}
	I0528 21:24:07.733697   45293 main.go:141] libmachine: (test-preload-285104) DBG | domain test-preload-285104 has defined IP address 192.168.39.188 and MAC address 52:54:00:7b:83:e0 in network mk-test-preload-285104
	I0528 21:24:07.733889   45293 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0528 21:24:07.738022   45293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
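
The host.minikube.internal entry is refreshed with a grep-and-rewrite of /etc/hosts rather than a blind append, so repeated starts do not accumulate duplicate lines; the same idiom is used later for control-plane.minikube.internal. A parameterised sketch of that pattern (NAME and IP here simply restate the values from the log):

    #!/usr/bin/env bash
    # Sketch: idempotently (re)write a single /etc/hosts entry, as the log does
    # for host.minikube.internal and later for control-plane.minikube.internal.
    set -euo pipefail

    IP="192.168.39.1"
    NAME="host.minikube.internal"

    # Drop any existing line for NAME, append the fresh mapping, then install the result.
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > "/tmp/hosts.$$"
    sudo cp "/tmp/hosts.$$" /etc/hosts
    rm -f "/tmp/hosts.$$"
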
	I0528 21:24:07.750700   45293 kubeadm.go:877] updating cluster {Name:test-preload-285104 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.24.4 ClusterName:test-preload-285104 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.188 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker
MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0528 21:24:07.750840   45293 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0528 21:24:07.750888   45293 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 21:24:07.787120   45293 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0528 21:24:07.787171   45293 ssh_runner.go:195] Run: which lz4
	I0528 21:24:07.790989   45293 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0528 21:24:07.795294   45293 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0528 21:24:07.795318   45293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0528 21:24:09.357643   45293 crio.go:462] duration metric: took 1.566683167s to copy over tarball
	I0528 21:24:09.357715   45293 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0528 21:24:11.685054   45293 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.327305208s)
	I0528 21:24:11.685092   45293 crio.go:469] duration metric: took 2.327421511s to extract the tarball
	I0528 21:24:11.685102   45293 ssh_runner.go:146] rm: /preloaded.tar.lz4
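
Since the runtime reported no preloaded images, the ~460 MB preload tarball is copied to the guest, unpacked into /var, and then deleted. A sketch of the unpack step (paths and tar flags from the log; for a CRI-O preload the image store lands under /var/lib/containers):

    #!/usr/bin/env bash
    # Sketch: extract a minikube preload tarball into the container-runtime storage
    # under /var, then remove the tarball, as in the log above.
    set -euo pipefail

    TARBALL=/preloaded.tar.lz4   # scp'd from the host preload cache in the log

    # --xattrs/--xattrs-include keep the security.capability xattr on binaries;
    # -I lz4 decompresses; -C /var unpacks the image store in place.
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf "$TARBALL"
    sudo rm -f "$TARBALL"

    # The runtime should now report the preloaded images.
    sudo crictl images --output json | head -c 400; echo
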
	I0528 21:24:11.726720   45293 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 21:24:11.767774   45293 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0528 21:24:11.767796   45293 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0528 21:24:11.767847   45293 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0528 21:24:11.767868   45293 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0528 21:24:11.767899   45293 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0528 21:24:11.767919   45293 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0528 21:24:11.767955   45293 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0528 21:24:11.767994   45293 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0528 21:24:11.767879   45293 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0528 21:24:11.768042   45293 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0528 21:24:11.769589   45293 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0528 21:24:11.769616   45293 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0528 21:24:11.769645   45293 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0528 21:24:11.769589   45293 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0528 21:24:11.769592   45293 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0528 21:24:11.769596   45293 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0528 21:24:11.769593   45293 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0528 21:24:11.769593   45293 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0528 21:24:11.907159   45293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0528 21:24:11.915104   45293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0528 21:24:11.938900   45293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0528 21:24:11.965158   45293 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0528 21:24:11.965194   45293 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0528 21:24:11.965235   45293 ssh_runner.go:195] Run: which crictl
	I0528 21:24:11.979319   45293 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0528 21:24:11.979359   45293 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0528 21:24:11.979410   45293 ssh_runner.go:195] Run: which crictl
	I0528 21:24:11.983120   45293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0528 21:24:12.009913   45293 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0528 21:24:12.009959   45293 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0528 21:24:12.009996   45293 ssh_runner.go:195] Run: which crictl
	I0528 21:24:12.009998   45293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0528 21:24:12.010007   45293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0528 21:24:12.018531   45293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0528 21:24:12.022805   45293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0528 21:24:12.042168   45293 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0528 21:24:12.042207   45293 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0528 21:24:12.042253   45293 ssh_runner.go:195] Run: which crictl
	I0528 21:24:12.069224   45293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0528 21:24:12.120003   45293 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0528 21:24:12.120056   45293 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0528 21:24:12.120110   45293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0528 21:24:12.120127   45293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0528 21:24:12.120148   45293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0528 21:24:12.168127   45293 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0528 21:24:12.168159   45293 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0528 21:24:12.168214   45293 ssh_runner.go:195] Run: which crictl
	I0528 21:24:12.168214   45293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0528 21:24:12.168270   45293 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0528 21:24:12.168302   45293 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0528 21:24:12.168306   45293 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0528 21:24:12.168322   45293 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0528 21:24:12.168341   45293 ssh_runner.go:195] Run: which crictl
	I0528 21:24:12.168344   45293 ssh_runner.go:195] Run: which crictl
	I0528 21:24:12.194411   45293 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0528 21:24:12.194434   45293 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0528 21:24:12.194436   45293 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0528 21:24:12.194475   45293 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0528 21:24:12.194507   45293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0528 21:24:12.194521   45293 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0528 21:24:12.194616   45293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0528 21:24:12.223964   45293 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0528 21:24:12.224022   45293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0528 21:24:12.224051   45293 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0528 21:24:12.224082   45293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0528 21:24:12.224111   45293 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0528 21:24:12.282945   45293 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0528 21:24:12.283063   45293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0528 21:24:12.743585   45293 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0528 21:24:14.907475   45293 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.7: (2.712974334s)
	I0528 21:24:14.907511   45293 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0528 21:24:14.907545   45293 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4: (2.683499479s)
	I0528 21:24:14.907566   45293 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0528 21:24:14.907583   45293 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0528 21:24:14.907584   45293 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0: (2.683518669s)
	I0528 21:24:14.907607   45293 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0528 21:24:14.907640   45293 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0528 21:24:14.907678   45293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0528 21:24:14.907689   45293 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4: (2.624609926s)
	I0528 21:24:14.907681   45293 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0528 21:24:14.907642   45293 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4: (2.683539383s)
	I0528 21:24:14.907714   45293 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0528 21:24:14.907722   45293 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.164114671s)
	I0528 21:24:14.907727   45293 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0528 21:24:14.913307   45293 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0528 21:24:15.252849   45293 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0528 21:24:15.252886   45293 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0528 21:24:15.252903   45293 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0528 21:24:15.252936   45293 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0528 21:24:15.999843   45293 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0528 21:24:15.999886   45293 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0528 21:24:15.999936   45293 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0528 21:24:16.444594   45293 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0528 21:24:16.444636   45293 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0528 21:24:16.444683   45293 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0528 21:24:17.290870   45293 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0528 21:24:17.290919   45293 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0528 21:24:17.290975   45293 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0528 21:24:17.938883   45293 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0528 21:24:17.938934   45293 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0528 21:24:17.938990   45293 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0528 21:24:20.098352   45293 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.159331708s)
	I0528 21:24:20.098391   45293 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0528 21:24:20.098427   45293 cache_images.go:123] Successfully loaded all cached images
	I0528 21:24:20.098437   45293 cache_images.go:92] duration metric: took 8.330630219s to LoadCachedImages
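
Because the preload did not contain the v1.24.4 images, each image is transferred from the host cache and loaded into CRI-O via podman, after removing any stale tag with crictl. Condensed into a sketch for a single image (the cache path under $HOME is an illustrative stand-in for the jenkins layout seen in the log):

    #!/usr/bin/env bash
    # Sketch: load one cached image archive into CRI-O storage the way the log does
    # (crictl rmi any stale tag, copy the archive over, podman load it).
    set -euo pipefail

    IMAGE="registry.k8s.io/kube-apiserver:v1.24.4"
    ARCHIVE="$HOME/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4"  # illustrative cache path
    REMOTE=/var/lib/minikube/images/kube-apiserver_v1.24.4

    # Remove the stale tag when the digest in the runtime does not match the cache.
    sudo /usr/bin/crictl rmi "$IMAGE" || true

    # Copy the archive into place (the log uses scp over the machine's SSH session)
    # and load it; podman and CRI-O share the same containers/storage backend.
    sudo cp "$ARCHIVE" "$REMOTE"
    sudo podman load -i "$REMOTE"
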
	I0528 21:24:20.098448   45293 kubeadm.go:928] updating node { 192.168.39.188 8443 v1.24.4 crio true true} ...
	I0528 21:24:20.098603   45293 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-285104 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.188
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-285104 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
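
The kubelet is wired to CRI-O through a systemd drop-in that overrides ExecStart with the flags shown above (the drop-in path appears further down in the log as 10-kubeadm.conf). Written out as a sketch of the file it produces, with contents taken from the log rather than the exact bytes minikube writes:

    #!/usr/bin/env bash
    # Sketch: write the kubelet systemd drop-in shown in the log and start kubelet.
    set -euo pipefail

    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
    [Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-285104 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.188

    [Install]
    EOF

    sudo systemctl daemon-reload
    sudo systemctl start kubelet
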
	I0528 21:24:20.098687   45293 ssh_runner.go:195] Run: crio config
	I0528 21:24:20.147261   45293 cni.go:84] Creating CNI manager for ""
	I0528 21:24:20.147288   45293 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 21:24:20.147304   45293 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0528 21:24:20.147328   45293 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.188 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-285104 NodeName:test-preload-285104 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.188"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.188 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0528 21:24:20.147490   45293 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.188
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-285104"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.188
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.188"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0528 21:24:20.147565   45293 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0528 21:24:20.157316   45293 binaries.go:44] Found k8s binaries, skipping transfer
	I0528 21:24:20.157382   45293 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0528 21:24:20.166577   45293 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0528 21:24:20.182439   45293 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0528 21:24:20.197945   45293 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0528 21:24:20.214119   45293 ssh_runner.go:195] Run: grep 192.168.39.188	control-plane.minikube.internal$ /etc/hosts
	I0528 21:24:20.217858   45293 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.188	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 21:24:20.229501   45293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 21:24:20.348084   45293 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 21:24:20.364660   45293 certs.go:68] Setting up /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/test-preload-285104 for IP: 192.168.39.188
	I0528 21:24:20.364685   45293 certs.go:194] generating shared ca certs ...
	I0528 21:24:20.364706   45293 certs.go:226] acquiring lock for ca certs: {Name:mkf081a2e878fd451cfbdf01dc28a5f7a6a3abc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:24:20.364882   45293 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key
	I0528 21:24:20.364944   45293 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key
	I0528 21:24:20.364958   45293 certs.go:256] generating profile certs ...
	I0528 21:24:20.365062   45293 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/test-preload-285104/client.key
	I0528 21:24:20.365148   45293 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/test-preload-285104/apiserver.key.68c6224a
	I0528 21:24:20.365200   45293 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/test-preload-285104/proxy-client.key
	I0528 21:24:20.365357   45293 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem (1338 bytes)
	W0528 21:24:20.365401   45293 certs.go:480] ignoring /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760_empty.pem, impossibly tiny 0 bytes
	I0528 21:24:20.365425   45293 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem (1675 bytes)
	I0528 21:24:20.365457   45293 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem (1078 bytes)
	I0528 21:24:20.365486   45293 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem (1123 bytes)
	I0528 21:24:20.365522   45293 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem (1675 bytes)
	I0528 21:24:20.365566   45293 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem (1708 bytes)
	I0528 21:24:20.366285   45293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0528 21:24:20.411595   45293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0528 21:24:20.440186   45293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0528 21:24:20.466510   45293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0528 21:24:20.495734   45293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/test-preload-285104/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0528 21:24:20.520432   45293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/test-preload-285104/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0528 21:24:20.552454   45293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/test-preload-285104/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0528 21:24:20.588902   45293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/test-preload-285104/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0528 21:24:20.611819   45293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem --> /usr/share/ca-certificates/11760.pem (1338 bytes)
	I0528 21:24:20.633730   45293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem --> /usr/share/ca-certificates/117602.pem (1708 bytes)
	I0528 21:24:20.656145   45293 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0528 21:24:20.679630   45293 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0528 21:24:20.696907   45293 ssh_runner.go:195] Run: openssl version
	I0528 21:24:20.702566   45293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11760.pem && ln -fs /usr/share/ca-certificates/11760.pem /etc/ssl/certs/11760.pem"
	I0528 21:24:20.712979   45293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11760.pem
	I0528 21:24:20.717408   45293 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 28 20:34 /usr/share/ca-certificates/11760.pem
	I0528 21:24:20.717453   45293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11760.pem
	I0528 21:24:20.723060   45293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11760.pem /etc/ssl/certs/51391683.0"
	I0528 21:24:20.733272   45293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/117602.pem && ln -fs /usr/share/ca-certificates/117602.pem /etc/ssl/certs/117602.pem"
	I0528 21:24:20.743745   45293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117602.pem
	I0528 21:24:20.748089   45293 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 28 20:34 /usr/share/ca-certificates/117602.pem
	I0528 21:24:20.748139   45293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117602.pem
	I0528 21:24:20.753632   45293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/117602.pem /etc/ssl/certs/3ec20f2e.0"
	I0528 21:24:20.763985   45293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0528 21:24:20.774551   45293 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0528 21:24:20.778729   45293 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 28 20:22 /usr/share/ca-certificates/minikubeCA.pem
	I0528 21:24:20.778766   45293 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0528 21:24:20.783993   45293 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0528 21:24:20.794623   45293 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0528 21:24:20.798892   45293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0528 21:24:20.804626   45293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0528 21:24:20.810256   45293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0528 21:24:20.815964   45293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0528 21:24:20.821511   45293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0528 21:24:20.826986   45293 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
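
Before deciding whether to regenerate certificates, each existing control-plane certificate is checked for expiry within the next 24 hours (86400 seconds) using openssl's -checkend, which exits non-zero when the certificate would expire inside that window. A standalone version of that check over the same files:

    #!/usr/bin/env bash
    # Sketch: report which control-plane certs expire within 24h (86400s),
    # mirroring the openssl -checkend probes in the log.
    set -euo pipefail

    CERTS=(
      /var/lib/minikube/certs/apiserver-etcd-client.crt
      /var/lib/minikube/certs/apiserver-kubelet-client.crt
      /var/lib/minikube/certs/etcd/server.crt
      /var/lib/minikube/certs/etcd/healthcheck-client.crt
      /var/lib/minikube/certs/etcd/peer.crt
      /var/lib/minikube/certs/front-proxy-client.crt
    )

    for cert in "${CERTS[@]}"; do
      if sudo openssl x509 -noout -in "$cert" -checkend 86400; then
        echo "OK      $cert"
      else
        echo "EXPIRES $cert (within 24h) -> would trigger regeneration"
      fi
    done
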
	I0528 21:24:20.832443   45293 kubeadm.go:391] StartCluster: {Name:test-preload-285104 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
24.4 ClusterName:test-preload-285104 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.188 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:24:20.832527   45293 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0528 21:24:20.832568   45293 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0528 21:24:20.872244   45293 cri.go:89] found id: ""
	I0528 21:24:20.872312   45293 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0528 21:24:20.883178   45293 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0528 21:24:20.883235   45293 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0528 21:24:20.883242   45293 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0528 21:24:20.883288   45293 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0528 21:24:20.893714   45293 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0528 21:24:20.894198   45293 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-285104" does not appear in /home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 21:24:20.894308   45293 kubeconfig.go:62] /home/jenkins/minikube-integration/18966-3963/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-285104" cluster setting kubeconfig missing "test-preload-285104" context setting]
	I0528 21:24:20.894577   45293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/kubeconfig: {Name:mkb9ab62df00efc21274382bbe7156f03baabac9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:24:20.895173   45293 kapi.go:59] client config for test-preload-285104: &rest.Config{Host:"https://192.168.39.188:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18966-3963/.minikube/profiles/test-preload-285104/client.crt", KeyFile:"/home/jenkins/minikube-integration/18966-3963/.minikube/profiles/test-preload-285104/client.key", CAFile:"/home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf8220), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0528 21:24:20.895746   45293 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0528 21:24:20.905508   45293 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.188
	I0528 21:24:20.905537   45293 kubeadm.go:1154] stopping kube-system containers ...
	I0528 21:24:20.905548   45293 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0528 21:24:20.905598   45293 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0528 21:24:20.941862   45293 cri.go:89] found id: ""
	I0528 21:24:20.941921   45293 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0528 21:24:20.958304   45293 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0528 21:24:20.967762   45293 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0528 21:24:20.967782   45293 kubeadm.go:156] found existing configuration files:
	
	I0528 21:24:20.967818   45293 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0528 21:24:20.976779   45293 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0528 21:24:20.976817   45293 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0528 21:24:20.986305   45293 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0528 21:24:20.995367   45293 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0528 21:24:20.995406   45293 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0528 21:24:21.004634   45293 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0528 21:24:21.013685   45293 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0528 21:24:21.013728   45293 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0528 21:24:21.022701   45293 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0528 21:24:21.031475   45293 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0528 21:24:21.031508   45293 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0528 21:24:21.040731   45293 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0528 21:24:21.049968   45293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 21:24:21.145820   45293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 21:24:22.011762   45293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0528 21:24:22.284121   45293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 21:24:22.380353   45293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
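	The five commands above rebuild the control plane piecewise with kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) instead of a full kubeadm init. Below is a minimal Go sketch of that sequence, shelling out through /bin/bash -c the way minikube's ssh_runner does; the binary and config paths are copied from the log, while the runPhase helper and error handling are illustrative, not minikube's actual code.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // runPhase mirrors the logged commands: each kubeadm phase is run through
    // bash so the quoted PATH override expands, against the regenerated config.
    func runPhase(phase string) error {
    	cmd := exec.Command("/bin/bash", "-c",
    		`sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase `+
    			phase+` --config /var/tmp/minikube/kubeadm.yaml`)
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("kubeadm init phase %s: %v\n%s", phase, err, out)
    	}
    	return nil
    }

    func main() {
    	// Same order as the log: certs, kubeconfig, kubelet-start, control-plane, etcd.
    	for _, phase := range []string{
    		"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local",
    	} {
    		if err := runPhase(phase); err != nil {
    			panic(err)
    		}
    	}
    }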
	I0528 21:24:22.470261   45293 api_server.go:52] waiting for apiserver process to appear ...
	I0528 21:24:22.470354   45293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:24:22.970909   45293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:24:23.471005   45293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:24:23.511230   45293 api_server.go:72] duration metric: took 1.040970969s to wait for apiserver process to appear ...
	I0528 21:24:23.511266   45293 api_server.go:88] waiting for apiserver healthz status ...
	I0528 21:24:23.511287   45293 api_server.go:253] Checking apiserver healthz at https://192.168.39.188:8443/healthz ...
	I0528 21:24:23.511784   45293 api_server.go:269] stopped: https://192.168.39.188:8443/healthz: Get "https://192.168.39.188:8443/healthz": dial tcp 192.168.39.188:8443: connect: connection refused
	I0528 21:24:24.011354   45293 api_server.go:253] Checking apiserver healthz at https://192.168.39.188:8443/healthz ...
	I0528 21:24:27.613585   45293 api_server.go:279] https://192.168.39.188:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0528 21:24:27.613613   45293 api_server.go:103] status: https://192.168.39.188:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0528 21:24:27.613626   45293 api_server.go:253] Checking apiserver healthz at https://192.168.39.188:8443/healthz ...
	I0528 21:24:27.689953   45293 api_server.go:279] https://192.168.39.188:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 21:24:27.689980   45293 api_server.go:103] status: https://192.168.39.188:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 21:24:28.011322   45293 api_server.go:253] Checking apiserver healthz at https://192.168.39.188:8443/healthz ...
	I0528 21:24:28.016280   45293 api_server.go:279] https://192.168.39.188:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 21:24:28.016300   45293 api_server.go:103] status: https://192.168.39.188:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 21:24:28.511850   45293 api_server.go:253] Checking apiserver healthz at https://192.168.39.188:8443/healthz ...
	I0528 21:24:28.522774   45293 api_server.go:279] https://192.168.39.188:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 21:24:28.522806   45293 api_server.go:103] status: https://192.168.39.188:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 21:24:29.011414   45293 api_server.go:253] Checking apiserver healthz at https://192.168.39.188:8443/healthz ...
	I0528 21:24:29.017260   45293 api_server.go:279] https://192.168.39.188:8443/healthz returned 200:
	ok
	I0528 21:24:29.023318   45293 api_server.go:141] control plane version: v1.24.4
	I0528 21:24:29.023342   45293 api_server.go:131] duration metric: took 5.512069803s to wait for apiserver health ...
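	The healthz wait above tolerates the early 403 (anonymous request rejected) and 500 (post-start hooks still settling) responses and only finishes once /healthz returns 200 "ok". A minimal sketch of that polling loop follows; minikube itself authenticates with the client certificate shown in the kapi client config, whereas this sketch simply skips TLS verification, so treat it as an approximation rather than the real implementation.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // pollHealthz probes the apiserver /healthz endpoint until it returns HTTP 200,
    // logging the intermediate 403/500 bodies the way the output above does.
    func pollHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // /healthz answered "ok"
    			}
    			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
    }

    func main() {
    	if err := pollHealthz("https://192.168.39.188:8443/healthz", time.Minute); err != nil {
    		panic(err)
    	}
    }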
	I0528 21:24:29.023349   45293 cni.go:84] Creating CNI manager for ""
	I0528 21:24:29.023356   45293 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 21:24:29.025118   45293 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0528 21:24:29.026543   45293 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0528 21:24:29.037705   45293 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
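	The bridge CNI step only copies a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist; its exact contents are not shown in the log. The Go snippet below embeds an illustrative bridge + host-local + portmap conflist in the standard CNI schema (the pod subnet, bridge name, and output path are assumptions, not minikube's actual values) and writes it out the same way the scp above does.

    package main

    import "os"

    // illustrativeConflist is an example of the standard CNI conflist schema a
    // bridge network uses; minikube's real 1-k8s.conflist may differ in details.
    const illustrativeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
    	// Write to /tmp for experimentation; the node path is /etc/cni/net.d/1-k8s.conflist.
    	if err := os.WriteFile("/tmp/1-k8s.conflist", []byte(illustrativeConflist), 0o644); err != nil {
    		panic(err)
    	}
    }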
	I0528 21:24:29.064724   45293 system_pods.go:43] waiting for kube-system pods to appear ...
	I0528 21:24:29.076430   45293 system_pods.go:59] 8 kube-system pods found
	I0528 21:24:29.076465   45293 system_pods.go:61] "coredns-6d4b75cb6d-6b782" [22d50d80-dd36-4a5d-bfca-7756cfd90473] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0528 21:24:29.076472   45293 system_pods.go:61] "coredns-6d4b75cb6d-kcd7d" [337096ba-9b57-4846-9eb9-ca453fbd634b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0528 21:24:29.076479   45293 system_pods.go:61] "etcd-test-preload-285104" [b26f754b-08e6-4096-96e7-5246fb1a6a94] Running
	I0528 21:24:29.076487   45293 system_pods.go:61] "kube-apiserver-test-preload-285104" [42cddee8-b7d5-4cac-b2f2-2dcb4bf75d17] Running
	I0528 21:24:29.076495   45293 system_pods.go:61] "kube-controller-manager-test-preload-285104" [442232bf-5442-4996-aa44-7671259d6224] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0528 21:24:29.076500   45293 system_pods.go:61] "kube-proxy-dxcb8" [46e042cb-9f2a-48d3-9574-8585927483de] Running
	I0528 21:24:29.076506   45293 system_pods.go:61] "kube-scheduler-test-preload-285104" [fdbf2202-e520-45ae-996a-e52365a10529] Running
	I0528 21:24:29.076513   45293 system_pods.go:61] "storage-provisioner" [b8847b1f-fc29-404e-b610-1e7d0999b34d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0528 21:24:29.076520   45293 system_pods.go:74] duration metric: took 11.779253ms to wait for pod list to return data ...
	I0528 21:24:29.076532   45293 node_conditions.go:102] verifying NodePressure condition ...
	I0528 21:24:29.079934   45293 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 21:24:29.079963   45293 node_conditions.go:123] node cpu capacity is 2
	I0528 21:24:29.079975   45293 node_conditions.go:105] duration metric: took 3.438607ms to run NodePressure ...
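	The pod list and NodePressure checks above amount to two client-go calls: list the kube-system pods, then read each node's reported capacity (cpu and ephemeral storage). A hedged sketch, assuming the kubeconfig this run updates is usable from the host; everything apart from the two API calls is illustrative.

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Build a client from the kubeconfig this run rewrites.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18966-3963/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx := context.Background()

    	// Equivalent of "waiting for kube-system pods to appear".
    	pods, err := clientset.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%d kube-system pods found\n", len(pods.Items))

    	// Equivalent of the NodePressure capacity check (cpu, ephemeral storage).
    	nodes, err := clientset.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
    	}
    }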
	I0528 21:24:29.080000   45293 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 21:24:29.249616   45293 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0528 21:24:29.253748   45293 kubeadm.go:733] kubelet initialised
	I0528 21:24:29.253784   45293 kubeadm.go:734] duration metric: took 4.141895ms waiting for restarted kubelet to initialise ...
	I0528 21:24:29.253791   45293 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 21:24:29.260047   45293 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-6b782" in "kube-system" namespace to be "Ready" ...
	I0528 21:24:29.272195   45293 pod_ready.go:97] node "test-preload-285104" hosting pod "coredns-6d4b75cb6d-6b782" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-285104" has status "Ready":"False"
	I0528 21:24:29.272219   45293 pod_ready.go:81] duration metric: took 12.150479ms for pod "coredns-6d4b75cb6d-6b782" in "kube-system" namespace to be "Ready" ...
	E0528 21:24:29.272232   45293 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-285104" hosting pod "coredns-6d4b75cb6d-6b782" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-285104" has status "Ready":"False"
	I0528 21:24:29.272239   45293 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-kcd7d" in "kube-system" namespace to be "Ready" ...
	I0528 21:24:29.276506   45293 pod_ready.go:97] node "test-preload-285104" hosting pod "coredns-6d4b75cb6d-kcd7d" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-285104" has status "Ready":"False"
	I0528 21:24:29.276531   45293 pod_ready.go:81] duration metric: took 4.282947ms for pod "coredns-6d4b75cb6d-kcd7d" in "kube-system" namespace to be "Ready" ...
	E0528 21:24:29.276542   45293 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-285104" hosting pod "coredns-6d4b75cb6d-kcd7d" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-285104" has status "Ready":"False"
	I0528 21:24:29.276549   45293 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-285104" in "kube-system" namespace to be "Ready" ...
	I0528 21:24:29.280794   45293 pod_ready.go:97] node "test-preload-285104" hosting pod "etcd-test-preload-285104" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-285104" has status "Ready":"False"
	I0528 21:24:29.280819   45293 pod_ready.go:81] duration metric: took 4.25721ms for pod "etcd-test-preload-285104" in "kube-system" namespace to be "Ready" ...
	E0528 21:24:29.280830   45293 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-285104" hosting pod "etcd-test-preload-285104" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-285104" has status "Ready":"False"
	I0528 21:24:29.280838   45293 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-285104" in "kube-system" namespace to be "Ready" ...
	I0528 21:24:29.467725   45293 pod_ready.go:97] node "test-preload-285104" hosting pod "kube-apiserver-test-preload-285104" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-285104" has status "Ready":"False"
	I0528 21:24:29.467753   45293 pod_ready.go:81] duration metric: took 186.90176ms for pod "kube-apiserver-test-preload-285104" in "kube-system" namespace to be "Ready" ...
	E0528 21:24:29.467761   45293 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-285104" hosting pod "kube-apiserver-test-preload-285104" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-285104" has status "Ready":"False"
	I0528 21:24:29.467767   45293 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-285104" in "kube-system" namespace to be "Ready" ...
	I0528 21:24:29.872682   45293 pod_ready.go:97] node "test-preload-285104" hosting pod "kube-controller-manager-test-preload-285104" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-285104" has status "Ready":"False"
	I0528 21:24:29.872707   45293 pod_ready.go:81] duration metric: took 404.93175ms for pod "kube-controller-manager-test-preload-285104" in "kube-system" namespace to be "Ready" ...
	E0528 21:24:29.872716   45293 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-285104" hosting pod "kube-controller-manager-test-preload-285104" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-285104" has status "Ready":"False"
	I0528 21:24:29.872722   45293 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-dxcb8" in "kube-system" namespace to be "Ready" ...
	I0528 21:24:30.269799   45293 pod_ready.go:97] node "test-preload-285104" hosting pod "kube-proxy-dxcb8" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-285104" has status "Ready":"False"
	I0528 21:24:30.269830   45293 pod_ready.go:81] duration metric: took 397.098538ms for pod "kube-proxy-dxcb8" in "kube-system" namespace to be "Ready" ...
	E0528 21:24:30.269840   45293 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-285104" hosting pod "kube-proxy-dxcb8" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-285104" has status "Ready":"False"
	I0528 21:24:30.269847   45293 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-285104" in "kube-system" namespace to be "Ready" ...
	I0528 21:24:30.668222   45293 pod_ready.go:97] node "test-preload-285104" hosting pod "kube-scheduler-test-preload-285104" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-285104" has status "Ready":"False"
	I0528 21:24:30.668251   45293 pod_ready.go:81] duration metric: took 398.396462ms for pod "kube-scheduler-test-preload-285104" in "kube-system" namespace to be "Ready" ...
	E0528 21:24:30.668261   45293 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-285104" hosting pod "kube-scheduler-test-preload-285104" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-285104" has status "Ready":"False"
	I0528 21:24:30.668270   45293 pod_ready.go:38] duration metric: took 1.414471314s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
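	Every per-pod wait in this first pass is skipped for the same reason: the node itself still reports Ready "False", so pod readiness cannot be trusted yet. A small client-go sketch of that node-level gate; the package and function names are illustrative.

    package readiness

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // NodeIsReady reports whether the NodeReady condition is True; this is the
    // gate that makes the waits above log `has status "Ready":"False"` and skip.
    func NodeIsReady(ctx context.Context, clientset kubernetes.Interface, name string) (bool, error) {
    	node, err := clientset.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, cond := range node.Status.Conditions {
    		if cond.Type == corev1.NodeReady {
    			return cond.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }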
	I0528 21:24:30.668301   45293 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0528 21:24:30.682562   45293 ops.go:34] apiserver oom_adj: -16
	I0528 21:24:30.682583   45293 kubeadm.go:591] duration metric: took 9.799335214s to restartPrimaryControlPlane
	I0528 21:24:30.682591   45293 kubeadm.go:393] duration metric: took 9.850154602s to StartCluster
	I0528 21:24:30.682606   45293 settings.go:142] acquiring lock: {Name:mkc1182212d780ab38539839372c25eb989b2e70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:24:30.682681   45293 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 21:24:30.683297   45293 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/kubeconfig: {Name:mkb9ab62df00efc21274382bbe7156f03baabac9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:24:30.683499   45293 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.188 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0528 21:24:30.685054   45293 out.go:177] * Verifying Kubernetes components...
	I0528 21:24:30.683568   45293 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0528 21:24:30.683691   45293 config.go:182] Loaded profile config "test-preload-285104": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0528 21:24:30.686404   45293 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 21:24:30.686419   45293 addons.go:69] Setting default-storageclass=true in profile "test-preload-285104"
	I0528 21:24:30.686446   45293 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-285104"
	I0528 21:24:30.686403   45293 addons.go:69] Setting storage-provisioner=true in profile "test-preload-285104"
	I0528 21:24:30.686499   45293 addons.go:234] Setting addon storage-provisioner=true in "test-preload-285104"
	W0528 21:24:30.686511   45293 addons.go:243] addon storage-provisioner should already be in state true
	I0528 21:24:30.686539   45293 host.go:66] Checking if "test-preload-285104" exists ...
	I0528 21:24:30.686779   45293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:24:30.686794   45293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:24:30.686818   45293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:24:30.686925   45293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:24:30.701631   45293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38465
	I0528 21:24:30.702037   45293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46411
	I0528 21:24:30.702104   45293 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:24:30.702538   45293 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:24:30.702625   45293 main.go:141] libmachine: Using API Version  1
	I0528 21:24:30.702651   45293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:24:30.703010   45293 main.go:141] libmachine: Using API Version  1
	I0528 21:24:30.703028   45293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:24:30.703064   45293 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:24:30.703332   45293 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:24:30.703606   45293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:24:30.703643   45293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:24:30.703653   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetState
	I0528 21:24:30.706100   45293 kapi.go:59] client config for test-preload-285104: &rest.Config{Host:"https://192.168.39.188:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18966-3963/.minikube/profiles/test-preload-285104/client.crt", KeyFile:"/home/jenkins/minikube-integration/18966-3963/.minikube/profiles/test-preload-285104/client.key", CAFile:"/home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf8220), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0528 21:24:30.706440   45293 addons.go:234] Setting addon default-storageclass=true in "test-preload-285104"
	W0528 21:24:30.706460   45293 addons.go:243] addon default-storageclass should already be in state true
	I0528 21:24:30.706488   45293 host.go:66] Checking if "test-preload-285104" exists ...
	I0528 21:24:30.707004   45293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:24:30.707047   45293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:24:30.718642   45293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35581
	I0528 21:24:30.719091   45293 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:24:30.719636   45293 main.go:141] libmachine: Using API Version  1
	I0528 21:24:30.719662   45293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:24:30.720011   45293 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:24:30.720219   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetState
	I0528 21:24:30.721831   45293 main.go:141] libmachine: (test-preload-285104) Calling .DriverName
	I0528 21:24:30.724079   45293 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0528 21:24:30.722180   45293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44279
	I0528 21:24:30.724470   45293 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:24:30.725524   45293 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0528 21:24:30.725544   45293 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0528 21:24:30.725572   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetSSHHostname
	I0528 21:24:30.725958   45293 main.go:141] libmachine: Using API Version  1
	I0528 21:24:30.725985   45293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:24:30.726325   45293 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:24:30.726824   45293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:24:30.726864   45293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:24:30.728870   45293 main.go:141] libmachine: (test-preload-285104) DBG | domain test-preload-285104 has defined MAC address 52:54:00:7b:83:e0 in network mk-test-preload-285104
	I0528 21:24:30.729327   45293 main.go:141] libmachine: (test-preload-285104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:83:e0", ip: ""} in network mk-test-preload-285104: {Iface:virbr1 ExpiryTime:2024-05-28 22:20:50 +0000 UTC Type:0 Mac:52:54:00:7b:83:e0 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-285104 Clientid:01:52:54:00:7b:83:e0}
	I0528 21:24:30.729356   45293 main.go:141] libmachine: (test-preload-285104) DBG | domain test-preload-285104 has defined IP address 192.168.39.188 and MAC address 52:54:00:7b:83:e0 in network mk-test-preload-285104
	I0528 21:24:30.729496   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetSSHPort
	I0528 21:24:30.729685   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetSSHKeyPath
	I0528 21:24:30.729865   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetSSHUsername
	I0528 21:24:30.730010   45293 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/test-preload-285104/id_rsa Username:docker}
	I0528 21:24:30.741948   45293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43881
	I0528 21:24:30.742369   45293 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:24:30.742812   45293 main.go:141] libmachine: Using API Version  1
	I0528 21:24:30.742841   45293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:24:30.743140   45293 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:24:30.743328   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetState
	I0528 21:24:30.744889   45293 main.go:141] libmachine: (test-preload-285104) Calling .DriverName
	I0528 21:24:30.745096   45293 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0528 21:24:30.745115   45293 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0528 21:24:30.745134   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetSSHHostname
	I0528 21:24:30.748254   45293 main.go:141] libmachine: (test-preload-285104) DBG | domain test-preload-285104 has defined MAC address 52:54:00:7b:83:e0 in network mk-test-preload-285104
	I0528 21:24:30.748692   45293 main.go:141] libmachine: (test-preload-285104) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:83:e0", ip: ""} in network mk-test-preload-285104: {Iface:virbr1 ExpiryTime:2024-05-28 22:20:50 +0000 UTC Type:0 Mac:52:54:00:7b:83:e0 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-285104 Clientid:01:52:54:00:7b:83:e0}
	I0528 21:24:30.748727   45293 main.go:141] libmachine: (test-preload-285104) DBG | domain test-preload-285104 has defined IP address 192.168.39.188 and MAC address 52:54:00:7b:83:e0 in network mk-test-preload-285104
	I0528 21:24:30.748905   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetSSHPort
	I0528 21:24:30.749061   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetSSHKeyPath
	I0528 21:24:30.749223   45293 main.go:141] libmachine: (test-preload-285104) Calling .GetSSHUsername
	I0528 21:24:30.749360   45293 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/test-preload-285104/id_rsa Username:docker}
	I0528 21:24:30.868517   45293 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 21:24:30.885546   45293 node_ready.go:35] waiting up to 6m0s for node "test-preload-285104" to be "Ready" ...
	I0528 21:24:30.968746   45293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0528 21:24:31.020969   45293 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0528 21:24:31.883744   45293 main.go:141] libmachine: Making call to close driver server
	I0528 21:24:31.883780   45293 main.go:141] libmachine: (test-preload-285104) Calling .Close
	I0528 21:24:31.883766   45293 main.go:141] libmachine: Making call to close driver server
	I0528 21:24:31.883867   45293 main.go:141] libmachine: (test-preload-285104) Calling .Close
	I0528 21:24:31.884059   45293 main.go:141] libmachine: Successfully made call to close driver server
	I0528 21:24:31.884078   45293 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 21:24:31.884096   45293 main.go:141] libmachine: (test-preload-285104) DBG | Closing plugin on server side
	I0528 21:24:31.884106   45293 main.go:141] libmachine: Successfully made call to close driver server
	I0528 21:24:31.884113   45293 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 21:24:31.884136   45293 main.go:141] libmachine: Making call to close driver server
	I0528 21:24:31.884145   45293 main.go:141] libmachine: (test-preload-285104) Calling .Close
	I0528 21:24:31.884167   45293 main.go:141] libmachine: Making call to close driver server
	I0528 21:24:31.884182   45293 main.go:141] libmachine: (test-preload-285104) Calling .Close
	I0528 21:24:31.884351   45293 main.go:141] libmachine: Successfully made call to close driver server
	I0528 21:24:31.884365   45293 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 21:24:31.884383   45293 main.go:141] libmachine: Successfully made call to close driver server
	I0528 21:24:31.884396   45293 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 21:24:31.884411   45293 main.go:141] libmachine: (test-preload-285104) DBG | Closing plugin on server side
	I0528 21:24:31.894503   45293 main.go:141] libmachine: Making call to close driver server
	I0528 21:24:31.894525   45293 main.go:141] libmachine: (test-preload-285104) Calling .Close
	I0528 21:24:31.894763   45293 main.go:141] libmachine: (test-preload-285104) DBG | Closing plugin on server side
	I0528 21:24:31.894805   45293 main.go:141] libmachine: Successfully made call to close driver server
	I0528 21:24:31.894817   45293 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 21:24:31.896788   45293 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0528 21:24:31.897993   45293 addons.go:510] duration metric: took 1.214433456s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0528 21:24:32.890070   45293 node_ready.go:53] node "test-preload-285104" has status "Ready":"False"
	I0528 21:24:34.891915   45293 node_ready.go:53] node "test-preload-285104" has status "Ready":"False"
	I0528 21:24:37.389666   45293 node_ready.go:53] node "test-preload-285104" has status "Ready":"False"
	I0528 21:24:38.389281   45293 node_ready.go:49] node "test-preload-285104" has status "Ready":"True"
	I0528 21:24:38.389306   45293 node_ready.go:38] duration metric: took 7.503722451s for node "test-preload-285104" to be "Ready" ...
	I0528 21:24:38.389318   45293 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 21:24:38.394833   45293 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-6b782" in "kube-system" namespace to be "Ready" ...
	I0528 21:24:38.398900   45293 pod_ready.go:92] pod "coredns-6d4b75cb6d-6b782" in "kube-system" namespace has status "Ready":"True"
	I0528 21:24:38.398916   45293 pod_ready.go:81] duration metric: took 4.057327ms for pod "coredns-6d4b75cb6d-6b782" in "kube-system" namespace to be "Ready" ...
	I0528 21:24:38.398923   45293 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-285104" in "kube-system" namespace to be "Ready" ...
	I0528 21:24:40.407763   45293 pod_ready.go:102] pod "etcd-test-preload-285104" in "kube-system" namespace has status "Ready":"False"
	I0528 21:24:42.905082   45293 pod_ready.go:92] pod "etcd-test-preload-285104" in "kube-system" namespace has status "Ready":"True"
	I0528 21:24:42.905106   45293 pod_ready.go:81] duration metric: took 4.506176395s for pod "etcd-test-preload-285104" in "kube-system" namespace to be "Ready" ...
	I0528 21:24:42.905116   45293 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-285104" in "kube-system" namespace to be "Ready" ...
	I0528 21:24:42.910008   45293 pod_ready.go:92] pod "kube-apiserver-test-preload-285104" in "kube-system" namespace has status "Ready":"True"
	I0528 21:24:42.910030   45293 pod_ready.go:81] duration metric: took 4.902685ms for pod "kube-apiserver-test-preload-285104" in "kube-system" namespace to be "Ready" ...
	I0528 21:24:42.910038   45293 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-285104" in "kube-system" namespace to be "Ready" ...
	I0528 21:24:42.913881   45293 pod_ready.go:92] pod "kube-controller-manager-test-preload-285104" in "kube-system" namespace has status "Ready":"True"
	I0528 21:24:42.913900   45293 pod_ready.go:81] duration metric: took 3.856245ms for pod "kube-controller-manager-test-preload-285104" in "kube-system" namespace to be "Ready" ...
	I0528 21:24:42.913909   45293 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dxcb8" in "kube-system" namespace to be "Ready" ...
	I0528 21:24:42.917934   45293 pod_ready.go:92] pod "kube-proxy-dxcb8" in "kube-system" namespace has status "Ready":"True"
	I0528 21:24:42.917953   45293 pod_ready.go:81] duration metric: took 4.038498ms for pod "kube-proxy-dxcb8" in "kube-system" namespace to be "Ready" ...
	I0528 21:24:42.917960   45293 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-285104" in "kube-system" namespace to be "Ready" ...
	I0528 21:24:42.921841   45293 pod_ready.go:92] pod "kube-scheduler-test-preload-285104" in "kube-system" namespace has status "Ready":"True"
	I0528 21:24:42.921864   45293 pod_ready.go:81] duration metric: took 3.897544ms for pod "kube-scheduler-test-preload-285104" in "kube-system" namespace to be "Ready" ...
	I0528 21:24:42.921875   45293 pod_ready.go:38] duration metric: took 4.532542465s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
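	Once the node reports Ready, each control-plane pod is accepted as soon as its own Ready condition flips to True; the per-pod durations above are measuring exactly that poll. A sketch of such a wait loop using client-go's polling helper, with the same illustrative package name and kubeconfig assumptions as the earlier sketches.

    package readiness

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // WaitForPodReady polls until the pod's Ready condition is True or the
    // timeout expires, the loop whose durations the lines above report.
    func WaitForPodReady(ctx context.Context, clientset kubernetes.Interface, namespace, name string, timeout time.Duration) error {
    	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
    		pod, err := clientset.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
    		if err != nil {
    			return false, nil // transient errors: keep polling until the timeout
    		}
    		for _, cond := range pod.Status.Conditions {
    			if cond.Type == corev1.PodReady {
    				return cond.Status == corev1.ConditionTrue, nil
    			}
    		}
    		return false, nil
    	})
    }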
	I0528 21:24:42.921891   45293 api_server.go:52] waiting for apiserver process to appear ...
	I0528 21:24:42.921955   45293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:24:42.936830   45293 api_server.go:72] duration metric: took 12.253301458s to wait for apiserver process to appear ...
	I0528 21:24:42.936855   45293 api_server.go:88] waiting for apiserver healthz status ...
	I0528 21:24:42.936872   45293 api_server.go:253] Checking apiserver healthz at https://192.168.39.188:8443/healthz ...
	I0528 21:24:42.942008   45293 api_server.go:279] https://192.168.39.188:8443/healthz returned 200:
	ok
	I0528 21:24:42.943091   45293 api_server.go:141] control plane version: v1.24.4
	I0528 21:24:42.943116   45293 api_server.go:131] duration metric: took 6.25386ms to wait for apiserver health ...
	I0528 21:24:42.943127   45293 system_pods.go:43] waiting for kube-system pods to appear ...
	I0528 21:24:43.105670   45293 system_pods.go:59] 7 kube-system pods found
	I0528 21:24:43.105705   45293 system_pods.go:61] "coredns-6d4b75cb6d-6b782" [22d50d80-dd36-4a5d-bfca-7756cfd90473] Running
	I0528 21:24:43.105713   45293 system_pods.go:61] "etcd-test-preload-285104" [b26f754b-08e6-4096-96e7-5246fb1a6a94] Running
	I0528 21:24:43.105719   45293 system_pods.go:61] "kube-apiserver-test-preload-285104" [42cddee8-b7d5-4cac-b2f2-2dcb4bf75d17] Running
	I0528 21:24:43.105727   45293 system_pods.go:61] "kube-controller-manager-test-preload-285104" [442232bf-5442-4996-aa44-7671259d6224] Running
	I0528 21:24:43.105732   45293 system_pods.go:61] "kube-proxy-dxcb8" [46e042cb-9f2a-48d3-9574-8585927483de] Running
	I0528 21:24:43.105738   45293 system_pods.go:61] "kube-scheduler-test-preload-285104" [fdbf2202-e520-45ae-996a-e52365a10529] Running
	I0528 21:24:43.105742   45293 system_pods.go:61] "storage-provisioner" [b8847b1f-fc29-404e-b610-1e7d0999b34d] Running
	I0528 21:24:43.105750   45293 system_pods.go:74] duration metric: took 162.616822ms to wait for pod list to return data ...
	I0528 21:24:43.105779   45293 default_sa.go:34] waiting for default service account to be created ...
	I0528 21:24:43.302824   45293 default_sa.go:45] found service account: "default"
	I0528 21:24:43.302853   45293 default_sa.go:55] duration metric: took 197.066778ms for default service account to be created ...
	I0528 21:24:43.302864   45293 system_pods.go:116] waiting for k8s-apps to be running ...
	I0528 21:24:43.505400   45293 system_pods.go:86] 7 kube-system pods found
	I0528 21:24:43.505440   45293 system_pods.go:89] "coredns-6d4b75cb6d-6b782" [22d50d80-dd36-4a5d-bfca-7756cfd90473] Running
	I0528 21:24:43.505446   45293 system_pods.go:89] "etcd-test-preload-285104" [b26f754b-08e6-4096-96e7-5246fb1a6a94] Running
	I0528 21:24:43.505450   45293 system_pods.go:89] "kube-apiserver-test-preload-285104" [42cddee8-b7d5-4cac-b2f2-2dcb4bf75d17] Running
	I0528 21:24:43.505454   45293 system_pods.go:89] "kube-controller-manager-test-preload-285104" [442232bf-5442-4996-aa44-7671259d6224] Running
	I0528 21:24:43.505463   45293 system_pods.go:89] "kube-proxy-dxcb8" [46e042cb-9f2a-48d3-9574-8585927483de] Running
	I0528 21:24:43.505468   45293 system_pods.go:89] "kube-scheduler-test-preload-285104" [fdbf2202-e520-45ae-996a-e52365a10529] Running
	I0528 21:24:43.505471   45293 system_pods.go:89] "storage-provisioner" [b8847b1f-fc29-404e-b610-1e7d0999b34d] Running
	I0528 21:24:43.505478   45293 system_pods.go:126] duration metric: took 202.608119ms to wait for k8s-apps to be running ...
	I0528 21:24:43.505487   45293 system_svc.go:44] waiting for kubelet service to be running ....
	I0528 21:24:43.505538   45293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 21:24:43.519923   45293 system_svc.go:56] duration metric: took 14.427795ms WaitForService to wait for kubelet
	I0528 21:24:43.519947   45293 kubeadm.go:576] duration metric: took 12.836423714s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 21:24:43.519963   45293 node_conditions.go:102] verifying NodePressure condition ...
	I0528 21:24:43.702645   45293 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 21:24:43.702673   45293 node_conditions.go:123] node cpu capacity is 2
	I0528 21:24:43.702684   45293 node_conditions.go:105] duration metric: took 182.718046ms to run NodePressure ...
	I0528 21:24:43.702697   45293 start.go:240] waiting for startup goroutines ...
	I0528 21:24:43.702706   45293 start.go:245] waiting for cluster config update ...
	I0528 21:24:43.702720   45293 start.go:254] writing updated cluster config ...
	I0528 21:24:43.703018   45293 ssh_runner.go:195] Run: rm -f paused
	I0528 21:24:43.747449   45293 start.go:600] kubectl: 1.30.1, cluster: 1.24.4 (minor skew: 6)
	I0528 21:24:43.749592   45293 out.go:177] 
	W0528 21:24:43.751173   45293 out.go:239] ! /usr/local/bin/kubectl is version 1.30.1, which may have incompatibilities with Kubernetes 1.24.4.
	I0528 21:24:43.752821   45293 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0528 21:24:43.754366   45293 out.go:177] * Done! kubectl is now configured to use "test-preload-285104" cluster and "default" namespace by default
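	The closing warning is about version skew only: the host kubectl is 1.30.1 while the cluster runs 1.24.4, six minor releases apart, which is outside the one-minor-version skew kubectl officially supports. A trivial sketch of that arithmetic (the function name is illustrative):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // minorSkew returns the absolute difference between the minor versions of a
    // client and server version string, e.g. ("1.30.1", "1.24.4") -> 6.
    func minorSkew(client, server string) (int, error) {
    	minor := func(v string) (int, error) {
    		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
    		if len(parts) < 2 {
    			return 0, fmt.Errorf("unexpected version %q", v)
    		}
    		return strconv.Atoi(parts[1])
    	}
    	c, err := minor(client)
    	if err != nil {
    		return 0, err
    	}
    	s, err := minor(server)
    	if err != nil {
    		return 0, err
    	}
    	if c > s {
    		return c - s, nil
    	}
    	return s - c, nil
    }

    func main() {
    	skew, _ := minorSkew("1.30.1", "1.24.4")
    	fmt.Println("minor skew:", skew) // prints 6, matching the warning above
    }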
	
	
	==> CRI-O <==
	May 28 21:24:44 test-preload-285104 crio[701]: time="2024-05-28 21:24:44.621193721Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716931484621174925,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4e365e03-4bde-4e5d-8d45-9a49208447b7 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:24:44 test-preload-285104 crio[701]: time="2024-05-28 21:24:44.621955805Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=154b5e58-2dc0-4479-a3a0-9988b7b79bc7 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:24:44 test-preload-285104 crio[701]: time="2024-05-28 21:24:44.622021633Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=154b5e58-2dc0-4479-a3a0-9988b7b79bc7 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:24:44 test-preload-285104 crio[701]: time="2024-05-28 21:24:44.622167336Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:00f97b8f0208d10ee8384ea4973e82d31a35c231aa4c3c05f976f84bbdcb6a00,PodSandboxId:a69cb82b2423d788501f1487e638edd1e407744ac92b611ead00cd87fbaf5550,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1716931476922602836,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-6b782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22d50d80-dd36-4a5d-bfca-7756cfd90473,},Annotations:map[string]string{io.kubernetes.container.hash: 2bf30190,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d63ba3d9d1dd5b0f7e9a608f84e79f06f46a74c366aaf0b9e7b4382baeec4d,PodSandboxId:02dac869a741f4bb2a0a6a36bfb57c5574b8cd44fad1c5c8e5a6ef52601fb597,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716931469738601724,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: b8847b1f-fc29-404e-b610-1e7d0999b34d,},Annotations:map[string]string{io.kubernetes.container.hash: b23055ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:967c7aa24a49b937c08465daf8b59162efd27a6bf07083b6d1787152cc2dc178,PodSandboxId:771bdce73e6f446959e4789068d8f9b6ed2cd13d7b6b8a1cfa9477af426ccf2d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1716931469403962911,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dxcb8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46
e042cb-9f2a-48d3-9574-8585927483de,},Annotations:map[string]string{io.kubernetes.container.hash: d3745f6c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b949eb6ab1b26fcd47b0beb99d70f9358d3eb07f5b59a0bb52ce7e2a65fab534,PodSandboxId:8123017300bbae13a73e75e1f2b0be3b8b65b5ad862f01b11bcd5ad0d327d0e5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1716931463184262487,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-285104,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c42ec51cc47e73f05088f5b7bfcc78b,},Anno
tations:map[string]string{io.kubernetes.container.hash: 2b2195ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02aa4cd394b4167103165111a0a4f462c823b7018dde9fe3d3f2eb7e7d92ef0b,PodSandboxId:99d44ef5857801aa8db7511e38bf6b27946901ef353f176e938ad94f14fab249,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1716931463164559024,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-285104,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ea0d38e260e5acda608f557c4260094,},Annotations:map
[string]string{io.kubernetes.container.hash: 844749df,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:958c4723761bcbd29ed9773a0bdfc156a83746bab9634df7668b600d6c454aab,PodSandboxId:61b54417d3a32bc6f36660f946a4f9b0d79b9c9ee72f35be439de5c607eab46a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1716931463223221227,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-285104,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c9023d98cbebfd21b929f4a867c3d04,}
,Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8887ee40ecffdbc29ee4b8127d5b3d0cf53f341ea86b4f20f49a106907340c4,PodSandboxId:134de5917613a1f7c576a046f19c362e94659bfbf336fda7b241057527d34b14,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1716931463152336000,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-285104,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efbc1469eed9d92555957bc485459752,},Annotation
s:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=154b5e58-2dc0-4479-a3a0-9988b7b79bc7 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:24:44 test-preload-285104 crio[701]: time="2024-05-28 21:24:44.658408918Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9188ea48-a780-4d48-a297-c6b72d89ca5e name=/runtime.v1.RuntimeService/Version
	May 28 21:24:44 test-preload-285104 crio[701]: time="2024-05-28 21:24:44.658476942Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9188ea48-a780-4d48-a297-c6b72d89ca5e name=/runtime.v1.RuntimeService/Version
	May 28 21:24:44 test-preload-285104 crio[701]: time="2024-05-28 21:24:44.660045026Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7612079f-2fbf-4fed-8271-1a589b5d2596 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:24:44 test-preload-285104 crio[701]: time="2024-05-28 21:24:44.660463276Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716931484660442171,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7612079f-2fbf-4fed-8271-1a589b5d2596 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:24:44 test-preload-285104 crio[701]: time="2024-05-28 21:24:44.661169241Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f96c311e-f57c-4a38-b36d-fc318edc8430 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:24:44 test-preload-285104 crio[701]: time="2024-05-28 21:24:44.661233329Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f96c311e-f57c-4a38-b36d-fc318edc8430 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:24:44 test-preload-285104 crio[701]: time="2024-05-28 21:24:44.661382762Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:00f97b8f0208d10ee8384ea4973e82d31a35c231aa4c3c05f976f84bbdcb6a00,PodSandboxId:a69cb82b2423d788501f1487e638edd1e407744ac92b611ead00cd87fbaf5550,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1716931476922602836,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-6b782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22d50d80-dd36-4a5d-bfca-7756cfd90473,},Annotations:map[string]string{io.kubernetes.container.hash: 2bf30190,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d63ba3d9d1dd5b0f7e9a608f84e79f06f46a74c366aaf0b9e7b4382baeec4d,PodSandboxId:02dac869a741f4bb2a0a6a36bfb57c5574b8cd44fad1c5c8e5a6ef52601fb597,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716931469738601724,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: b8847b1f-fc29-404e-b610-1e7d0999b34d,},Annotations:map[string]string{io.kubernetes.container.hash: b23055ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:967c7aa24a49b937c08465daf8b59162efd27a6bf07083b6d1787152cc2dc178,PodSandboxId:771bdce73e6f446959e4789068d8f9b6ed2cd13d7b6b8a1cfa9477af426ccf2d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1716931469403962911,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dxcb8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46
e042cb-9f2a-48d3-9574-8585927483de,},Annotations:map[string]string{io.kubernetes.container.hash: d3745f6c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b949eb6ab1b26fcd47b0beb99d70f9358d3eb07f5b59a0bb52ce7e2a65fab534,PodSandboxId:8123017300bbae13a73e75e1f2b0be3b8b65b5ad862f01b11bcd5ad0d327d0e5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1716931463184262487,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-285104,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c42ec51cc47e73f05088f5b7bfcc78b,},Anno
tations:map[string]string{io.kubernetes.container.hash: 2b2195ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02aa4cd394b4167103165111a0a4f462c823b7018dde9fe3d3f2eb7e7d92ef0b,PodSandboxId:99d44ef5857801aa8db7511e38bf6b27946901ef353f176e938ad94f14fab249,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1716931463164559024,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-285104,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ea0d38e260e5acda608f557c4260094,},Annotations:map
[string]string{io.kubernetes.container.hash: 844749df,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:958c4723761bcbd29ed9773a0bdfc156a83746bab9634df7668b600d6c454aab,PodSandboxId:61b54417d3a32bc6f36660f946a4f9b0d79b9c9ee72f35be439de5c607eab46a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1716931463223221227,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-285104,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c9023d98cbebfd21b929f4a867c3d04,}
,Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8887ee40ecffdbc29ee4b8127d5b3d0cf53f341ea86b4f20f49a106907340c4,PodSandboxId:134de5917613a1f7c576a046f19c362e94659bfbf336fda7b241057527d34b14,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1716931463152336000,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-285104,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efbc1469eed9d92555957bc485459752,},Annotation
s:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f96c311e-f57c-4a38-b36d-fc318edc8430 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:24:44 test-preload-285104 crio[701]: time="2024-05-28 21:24:44.698723501Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5c260a25-5cb1-4775-9dcd-330c52cdf761 name=/runtime.v1.RuntimeService/Version
	May 28 21:24:44 test-preload-285104 crio[701]: time="2024-05-28 21:24:44.698923876Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5c260a25-5cb1-4775-9dcd-330c52cdf761 name=/runtime.v1.RuntimeService/Version
	May 28 21:24:44 test-preload-285104 crio[701]: time="2024-05-28 21:24:44.700102861Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=56e02f2f-2967-44c0-933e-e7ed1d8d07aa name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:24:44 test-preload-285104 crio[701]: time="2024-05-28 21:24:44.700518139Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716931484700499501,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=56e02f2f-2967-44c0-933e-e7ed1d8d07aa name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:24:44 test-preload-285104 crio[701]: time="2024-05-28 21:24:44.701188372Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=be1c224c-7dba-4142-9d0e-c569534cd63a name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:24:44 test-preload-285104 crio[701]: time="2024-05-28 21:24:44.701235632Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=be1c224c-7dba-4142-9d0e-c569534cd63a name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:24:44 test-preload-285104 crio[701]: time="2024-05-28 21:24:44.701438632Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:00f97b8f0208d10ee8384ea4973e82d31a35c231aa4c3c05f976f84bbdcb6a00,PodSandboxId:a69cb82b2423d788501f1487e638edd1e407744ac92b611ead00cd87fbaf5550,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1716931476922602836,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-6b782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22d50d80-dd36-4a5d-bfca-7756cfd90473,},Annotations:map[string]string{io.kubernetes.container.hash: 2bf30190,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d63ba3d9d1dd5b0f7e9a608f84e79f06f46a74c366aaf0b9e7b4382baeec4d,PodSandboxId:02dac869a741f4bb2a0a6a36bfb57c5574b8cd44fad1c5c8e5a6ef52601fb597,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716931469738601724,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: b8847b1f-fc29-404e-b610-1e7d0999b34d,},Annotations:map[string]string{io.kubernetes.container.hash: b23055ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:967c7aa24a49b937c08465daf8b59162efd27a6bf07083b6d1787152cc2dc178,PodSandboxId:771bdce73e6f446959e4789068d8f9b6ed2cd13d7b6b8a1cfa9477af426ccf2d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1716931469403962911,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dxcb8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46
e042cb-9f2a-48d3-9574-8585927483de,},Annotations:map[string]string{io.kubernetes.container.hash: d3745f6c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b949eb6ab1b26fcd47b0beb99d70f9358d3eb07f5b59a0bb52ce7e2a65fab534,PodSandboxId:8123017300bbae13a73e75e1f2b0be3b8b65b5ad862f01b11bcd5ad0d327d0e5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1716931463184262487,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-285104,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c42ec51cc47e73f05088f5b7bfcc78b,},Anno
tations:map[string]string{io.kubernetes.container.hash: 2b2195ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02aa4cd394b4167103165111a0a4f462c823b7018dde9fe3d3f2eb7e7d92ef0b,PodSandboxId:99d44ef5857801aa8db7511e38bf6b27946901ef353f176e938ad94f14fab249,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1716931463164559024,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-285104,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ea0d38e260e5acda608f557c4260094,},Annotations:map
[string]string{io.kubernetes.container.hash: 844749df,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:958c4723761bcbd29ed9773a0bdfc156a83746bab9634df7668b600d6c454aab,PodSandboxId:61b54417d3a32bc6f36660f946a4f9b0d79b9c9ee72f35be439de5c607eab46a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1716931463223221227,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-285104,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c9023d98cbebfd21b929f4a867c3d04,}
,Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8887ee40ecffdbc29ee4b8127d5b3d0cf53f341ea86b4f20f49a106907340c4,PodSandboxId:134de5917613a1f7c576a046f19c362e94659bfbf336fda7b241057527d34b14,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1716931463152336000,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-285104,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efbc1469eed9d92555957bc485459752,},Annotation
s:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=be1c224c-7dba-4142-9d0e-c569534cd63a name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:24:44 test-preload-285104 crio[701]: time="2024-05-28 21:24:44.733112644Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fa5b94a8-c3b5-440c-a925-9272ea10fa95 name=/runtime.v1.RuntimeService/Version
	May 28 21:24:44 test-preload-285104 crio[701]: time="2024-05-28 21:24:44.733194066Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fa5b94a8-c3b5-440c-a925-9272ea10fa95 name=/runtime.v1.RuntimeService/Version
	May 28 21:24:44 test-preload-285104 crio[701]: time="2024-05-28 21:24:44.735120464Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6b70b435-81c0-4705-9bac-d594c1de5d63 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:24:44 test-preload-285104 crio[701]: time="2024-05-28 21:24:44.735560146Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716931484735540694,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6b70b435-81c0-4705-9bac-d594c1de5d63 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:24:44 test-preload-285104 crio[701]: time="2024-05-28 21:24:44.736452948Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=916629f0-3d75-44f7-9351-1362302cf99e name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:24:44 test-preload-285104 crio[701]: time="2024-05-28 21:24:44.736519709Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=916629f0-3d75-44f7-9351-1362302cf99e name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:24:44 test-preload-285104 crio[701]: time="2024-05-28 21:24:44.736690354Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:00f97b8f0208d10ee8384ea4973e82d31a35c231aa4c3c05f976f84bbdcb6a00,PodSandboxId:a69cb82b2423d788501f1487e638edd1e407744ac92b611ead00cd87fbaf5550,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1716931476922602836,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-6b782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22d50d80-dd36-4a5d-bfca-7756cfd90473,},Annotations:map[string]string{io.kubernetes.container.hash: 2bf30190,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d63ba3d9d1dd5b0f7e9a608f84e79f06f46a74c366aaf0b9e7b4382baeec4d,PodSandboxId:02dac869a741f4bb2a0a6a36bfb57c5574b8cd44fad1c5c8e5a6ef52601fb597,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716931469738601724,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: b8847b1f-fc29-404e-b610-1e7d0999b34d,},Annotations:map[string]string{io.kubernetes.container.hash: b23055ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:967c7aa24a49b937c08465daf8b59162efd27a6bf07083b6d1787152cc2dc178,PodSandboxId:771bdce73e6f446959e4789068d8f9b6ed2cd13d7b6b8a1cfa9477af426ccf2d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1716931469403962911,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dxcb8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46
e042cb-9f2a-48d3-9574-8585927483de,},Annotations:map[string]string{io.kubernetes.container.hash: d3745f6c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b949eb6ab1b26fcd47b0beb99d70f9358d3eb07f5b59a0bb52ce7e2a65fab534,PodSandboxId:8123017300bbae13a73e75e1f2b0be3b8b65b5ad862f01b11bcd5ad0d327d0e5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1716931463184262487,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-285104,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c42ec51cc47e73f05088f5b7bfcc78b,},Anno
tations:map[string]string{io.kubernetes.container.hash: 2b2195ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02aa4cd394b4167103165111a0a4f462c823b7018dde9fe3d3f2eb7e7d92ef0b,PodSandboxId:99d44ef5857801aa8db7511e38bf6b27946901ef353f176e938ad94f14fab249,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1716931463164559024,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-285104,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ea0d38e260e5acda608f557c4260094,},Annotations:map
[string]string{io.kubernetes.container.hash: 844749df,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:958c4723761bcbd29ed9773a0bdfc156a83746bab9634df7668b600d6c454aab,PodSandboxId:61b54417d3a32bc6f36660f946a4f9b0d79b9c9ee72f35be439de5c607eab46a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1716931463223221227,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-285104,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c9023d98cbebfd21b929f4a867c3d04,}
,Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8887ee40ecffdbc29ee4b8127d5b3d0cf53f341ea86b4f20f49a106907340c4,PodSandboxId:134de5917613a1f7c576a046f19c362e94659bfbf336fda7b241057527d34b14,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1716931463152336000,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-285104,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efbc1469eed9d92555957bc485459752,},Annotation
s:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=916629f0-3d75-44f7-9351-1362302cf99e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	00f97b8f0208d       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   7 seconds ago       Running             coredns                   1                   a69cb82b2423d       coredns-6d4b75cb6d-6b782
	82d63ba3d9d1d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 seconds ago      Running             storage-provisioner       1                   02dac869a741f       storage-provisioner
	967c7aa24a49b       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   15 seconds ago      Running             kube-proxy                1                   771bdce73e6f4       kube-proxy-dxcb8
	958c4723761bc       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   21 seconds ago      Running             kube-controller-manager   1                   61b54417d3a32       kube-controller-manager-test-preload-285104
	b949eb6ab1b26       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   21 seconds ago      Running             etcd                      1                   8123017300bba       etcd-test-preload-285104
	02aa4cd394b41       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   21 seconds ago      Running             kube-apiserver            1                   99d44ef585780       kube-apiserver-test-preload-285104
	c8887ee40ecff       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   21 seconds ago      Running             kube-scheduler            1                   134de5917613a       kube-scheduler-test-preload-285104
	
	
	==> coredns [00f97b8f0208d10ee8384ea4973e82d31a35c231aa4c3c05f976f84bbdcb6a00] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:40391 - 36322 "HINFO IN 945141437730494980.5150493746815843214. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.017366378s
	
	
	==> describe nodes <==
	Name:               test-preload-285104
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-285104
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82
	                    minikube.k8s.io/name=test-preload-285104
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_28T21_23_04_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 21:23:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-285104
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 21:24:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 May 2024 21:24:38 +0000   Tue, 28 May 2024 21:22:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 May 2024 21:24:38 +0000   Tue, 28 May 2024 21:22:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 May 2024 21:24:38 +0000   Tue, 28 May 2024 21:22:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 May 2024 21:24:38 +0000   Tue, 28 May 2024 21:24:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.188
	  Hostname:    test-preload-285104
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6373e3dad9db4f21a66491f716a239b3
	  System UUID:                6373e3da-d9db-4f21-a664-91f716a239b3
	  Boot ID:                    ec137dd5-9e94-454f-bb7f-ab9c69265e14
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-6b782                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     87s
	  kube-system                 etcd-test-preload-285104                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         100s
	  kube-system                 kube-apiserver-test-preload-285104             250m (12%)    0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 kube-controller-manager-test-preload-285104    200m (10%)    0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 kube-proxy-dxcb8                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-scheduler-test-preload-285104             100m (5%)     0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15s                kube-proxy       
	  Normal  Starting                 85s                kube-proxy       
	  Normal  Starting                 100s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  100s               kubelet          Node test-preload-285104 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    100s               kubelet          Node test-preload-285104 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     100s               kubelet          Node test-preload-285104 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  100s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                90s                kubelet          Node test-preload-285104 status is now: NodeReady
	  Normal  RegisteredNode           88s                node-controller  Node test-preload-285104 event: Registered Node test-preload-285104 in Controller
	  Normal  Starting                 22s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node test-preload-285104 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node test-preload-285104 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node test-preload-285104 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4s                 node-controller  Node test-preload-285104 event: Registered Node test-preload-285104 in Controller
	
	
	==> dmesg <==
	[May28 21:23] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050926] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040521] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.498724] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.341852] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.603389] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[May28 21:24] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.058171] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056268] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.190861] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.114824] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +0.258830] systemd-fstab-generator[686]: Ignoring "noauto" option for root device
	[ +12.851458] systemd-fstab-generator[959]: Ignoring "noauto" option for root device
	[  +0.059952] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.859901] systemd-fstab-generator[1087]: Ignoring "noauto" option for root device
	[  +4.503296] kauditd_printk_skb: 105 callbacks suppressed
	[  +4.057198] systemd-fstab-generator[1718]: Ignoring "noauto" option for root device
	[  +5.979744] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [b949eb6ab1b26fcd47b0beb99d70f9358d3eb07f5b59a0bb52ce7e2a65fab534] <==
	{"level":"info","ts":"2024-05-28T21:24:23.595Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"7b6f02fe5f633d8","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-05-28T21:24:23.605Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-05-28T21:24:23.621Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7b6f02fe5f633d8 switched to configuration voters=(555895692539081688)"}
	{"level":"info","ts":"2024-05-28T21:24:23.622Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"7653764497079f73","local-member-id":"7b6f02fe5f633d8","added-peer-id":"7b6f02fe5f633d8","added-peer-peer-urls":["https://192.168.39.188:2380"]}
	{"level":"info","ts":"2024-05-28T21:24:23.622Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-28T21:24:23.622Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"7b6f02fe5f633d8","initial-advertise-peer-urls":["https://192.168.39.188:2380"],"listen-peer-urls":["https://192.168.39.188:2380"],"advertise-client-urls":["https://192.168.39.188:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.188:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-28T21:24:23.626Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-28T21:24:23.626Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7653764497079f73","local-member-id":"7b6f02fe5f633d8","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-28T21:24:23.627Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-28T21:24:23.627Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.188:2380"}
	{"level":"info","ts":"2024-05-28T21:24:23.627Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.188:2380"}
	{"level":"info","ts":"2024-05-28T21:24:25.159Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7b6f02fe5f633d8 is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-28T21:24:25.159Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7b6f02fe5f633d8 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-28T21:24:25.159Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7b6f02fe5f633d8 received MsgPreVoteResp from 7b6f02fe5f633d8 at term 2"}
	{"level":"info","ts":"2024-05-28T21:24:25.159Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7b6f02fe5f633d8 became candidate at term 3"}
	{"level":"info","ts":"2024-05-28T21:24:25.159Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7b6f02fe5f633d8 received MsgVoteResp from 7b6f02fe5f633d8 at term 3"}
	{"level":"info","ts":"2024-05-28T21:24:25.159Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7b6f02fe5f633d8 became leader at term 3"}
	{"level":"info","ts":"2024-05-28T21:24:25.159Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7b6f02fe5f633d8 elected leader 7b6f02fe5f633d8 at term 3"}
	{"level":"info","ts":"2024-05-28T21:24:25.159Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"7b6f02fe5f633d8","local-member-attributes":"{Name:test-preload-285104 ClientURLs:[https://192.168.39.188:2379]}","request-path":"/0/members/7b6f02fe5f633d8/attributes","cluster-id":"7653764497079f73","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-28T21:24:25.160Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-28T21:24:25.160Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-28T21:24:25.161Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.188:2379"}
	{"level":"info","ts":"2024-05-28T21:24:25.161Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-28T21:24:25.161Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-28T21:24:25.162Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 21:24:45 up 0 min,  0 users,  load average: 1.09, 0.29, 0.10
	Linux test-preload-285104 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [02aa4cd394b4167103165111a0a4f462c823b7018dde9fe3d3f2eb7e7d92ef0b] <==
	I0528 21:24:27.564007       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0528 21:24:27.564029       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0528 21:24:27.564042       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0528 21:24:27.569051       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0528 21:24:27.569061       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0528 21:24:27.577631       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0528 21:24:27.628679       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0528 21:24:27.678872       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0528 21:24:27.708269       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0528 21:24:27.715680       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0528 21:24:27.715783       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0528 21:24:27.716211       1 cache.go:39] Caches are synced for autoregister controller
	I0528 21:24:27.730559       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0528 21:24:27.748305       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0528 21:24:27.751048       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0528 21:24:28.241296       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0528 21:24:28.566922       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0528 21:24:29.174271       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0528 21:24:29.183117       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0528 21:24:29.212158       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0528 21:24:29.230489       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0528 21:24:29.236042       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0528 21:24:29.775037       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0528 21:24:40.383222       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0528 21:24:40.588843       1 controller.go:611] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [958c4723761bcbd29ed9773a0bdfc156a83746bab9634df7668b600d6c454aab] <==
	I0528 21:24:40.383553       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0528 21:24:40.393920       1 shared_informer.go:262] Caches are synced for node
	I0528 21:24:40.394104       1 range_allocator.go:173] Starting range CIDR allocator
	I0528 21:24:40.394131       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0528 21:24:40.394187       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0528 21:24:40.397892       1 shared_informer.go:262] Caches are synced for attach detach
	I0528 21:24:40.402857       1 shared_informer.go:262] Caches are synced for ephemeral
	I0528 21:24:40.428388       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0528 21:24:40.478008       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0528 21:24:40.505631       1 shared_informer.go:262] Caches are synced for endpoint
	I0528 21:24:40.526502       1 shared_informer.go:262] Caches are synced for taint
	I0528 21:24:40.526613       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0528 21:24:40.526703       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-285104. Assuming now as a timestamp.
	I0528 21:24:40.526782       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0528 21:24:40.527014       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0528 21:24:40.527213       1 event.go:294] "Event occurred" object="test-preload-285104" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-285104 event: Registered Node test-preload-285104 in Controller"
	I0528 21:24:40.541660       1 shared_informer.go:262] Caches are synced for daemon sets
	I0528 21:24:40.585288       1 shared_informer.go:262] Caches are synced for resource quota
	I0528 21:24:40.612210       1 shared_informer.go:262] Caches are synced for resource quota
	I0528 21:24:40.615577       1 shared_informer.go:262] Caches are synced for deployment
	I0528 21:24:40.627144       1 shared_informer.go:262] Caches are synced for disruption
	I0528 21:24:40.627156       1 disruption.go:371] Sending events to api server.
	I0528 21:24:41.012176       1 shared_informer.go:262] Caches are synced for garbage collector
	I0528 21:24:41.012212       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0528 21:24:41.032064       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [967c7aa24a49b937c08465daf8b59162efd27a6bf07083b6d1787152cc2dc178] <==
	I0528 21:24:29.674079       1 node.go:163] Successfully retrieved node IP: 192.168.39.188
	I0528 21:24:29.674306       1 server_others.go:138] "Detected node IP" address="192.168.39.188"
	I0528 21:24:29.674367       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0528 21:24:29.760632       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0528 21:24:29.760670       1 server_others.go:206] "Using iptables Proxier"
	I0528 21:24:29.761067       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0528 21:24:29.761620       1 server.go:661] "Version info" version="v1.24.4"
	I0528 21:24:29.761638       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 21:24:29.764100       1 config.go:317] "Starting service config controller"
	I0528 21:24:29.764122       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0528 21:24:29.764145       1 config.go:226] "Starting endpoint slice config controller"
	I0528 21:24:29.764211       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0528 21:24:29.768057       1 config.go:444] "Starting node config controller"
	I0528 21:24:29.768084       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0528 21:24:29.865911       1 shared_informer.go:262] Caches are synced for service config
	I0528 21:24:29.869293       1 shared_informer.go:262] Caches are synced for node config
	I0528 21:24:29.875066       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [c8887ee40ecffdbc29ee4b8127d5b3d0cf53f341ea86b4f20f49a106907340c4] <==
	I0528 21:24:24.308805       1 serving.go:348] Generated self-signed cert in-memory
	W0528 21:24:27.638839       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0528 21:24:27.640798       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0528 21:24:27.640838       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0528 21:24:27.640847       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0528 21:24:27.690849       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0528 21:24:27.690944       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 21:24:27.700202       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0528 21:24:27.700774       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0528 21:24:27.700895       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0528 21:24:27.701107       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0528 21:24:27.801433       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 28 21:24:28 test-preload-285104 kubelet[1094]: I0528 21:24:28.471498    1094 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hscxw\" (UniqueName: \"kubernetes.io/projected/b8847b1f-fc29-404e-b610-1e7d0999b34d-kube-api-access-hscxw\") pod \"storage-provisioner\" (UID: \"b8847b1f-fc29-404e-b610-1e7d0999b34d\") " pod="kube-system/storage-provisioner"
	May 28 21:24:28 test-preload-285104 kubelet[1094]: I0528 21:24:28.471517    1094 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/46e042cb-9f2a-48d3-9574-8585927483de-kube-proxy\") pod \"kube-proxy-dxcb8\" (UID: \"46e042cb-9f2a-48d3-9574-8585927483de\") " pod="kube-system/kube-proxy-dxcb8"
	May 28 21:24:28 test-preload-285104 kubelet[1094]: I0528 21:24:28.471557    1094 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/46e042cb-9f2a-48d3-9574-8585927483de-xtables-lock\") pod \"kube-proxy-dxcb8\" (UID: \"46e042cb-9f2a-48d3-9574-8585927483de\") " pod="kube-system/kube-proxy-dxcb8"
	May 28 21:24:28 test-preload-285104 kubelet[1094]: I0528 21:24:28.471589    1094 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46e042cb-9f2a-48d3-9574-8585927483de-lib-modules\") pod \"kube-proxy-dxcb8\" (UID: \"46e042cb-9f2a-48d3-9574-8585927483de\") " pod="kube-system/kube-proxy-dxcb8"
	May 28 21:24:28 test-preload-285104 kubelet[1094]: I0528 21:24:28.471627    1094 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmzmd\" (UniqueName: \"kubernetes.io/projected/22d50d80-dd36-4a5d-bfca-7756cfd90473-kube-api-access-wmzmd\") pod \"coredns-6d4b75cb6d-6b782\" (UID: \"22d50d80-dd36-4a5d-bfca-7756cfd90473\") " pod="kube-system/coredns-6d4b75cb6d-6b782"
	May 28 21:24:28 test-preload-285104 kubelet[1094]: I0528 21:24:28.471650    1094 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7gkw\" (UniqueName: \"kubernetes.io/projected/46e042cb-9f2a-48d3-9574-8585927483de-kube-api-access-m7gkw\") pod \"kube-proxy-dxcb8\" (UID: \"46e042cb-9f2a-48d3-9574-8585927483de\") " pod="kube-system/kube-proxy-dxcb8"
	May 28 21:24:28 test-preload-285104 kubelet[1094]: I0528 21:24:28.471661    1094 reconciler.go:159] "Reconciler: start to sync state"
	May 28 21:24:28 test-preload-285104 kubelet[1094]: I0528 21:24:28.948324    1094 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dz4z4\" (UniqueName: \"kubernetes.io/projected/337096ba-9b57-4846-9eb9-ca453fbd634b-kube-api-access-dz4z4\") pod \"337096ba-9b57-4846-9eb9-ca453fbd634b\" (UID: \"337096ba-9b57-4846-9eb9-ca453fbd634b\") "
	May 28 21:24:28 test-preload-285104 kubelet[1094]: I0528 21:24:28.949100    1094 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/337096ba-9b57-4846-9eb9-ca453fbd634b-config-volume\") pod \"337096ba-9b57-4846-9eb9-ca453fbd634b\" (UID: \"337096ba-9b57-4846-9eb9-ca453fbd634b\") "
	May 28 21:24:28 test-preload-285104 kubelet[1094]: W0528 21:24:28.949404    1094 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/337096ba-9b57-4846-9eb9-ca453fbd634b/volumes/kubernetes.io~projected/kube-api-access-dz4z4: clearQuota called, but quotas disabled
	May 28 21:24:28 test-preload-285104 kubelet[1094]: I0528 21:24:28.949634    1094 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/337096ba-9b57-4846-9eb9-ca453fbd634b-kube-api-access-dz4z4" (OuterVolumeSpecName: "kube-api-access-dz4z4") pod "337096ba-9b57-4846-9eb9-ca453fbd634b" (UID: "337096ba-9b57-4846-9eb9-ca453fbd634b"). InnerVolumeSpecName "kube-api-access-dz4z4". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 28 21:24:28 test-preload-285104 kubelet[1094]: E0528 21:24:28.949719    1094 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	May 28 21:24:28 test-preload-285104 kubelet[1094]: W0528 21:24:28.949794    1094 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/337096ba-9b57-4846-9eb9-ca453fbd634b/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	May 28 21:24:28 test-preload-285104 kubelet[1094]: E0528 21:24:28.949836    1094 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/22d50d80-dd36-4a5d-bfca-7756cfd90473-config-volume podName:22d50d80-dd36-4a5d-bfca-7756cfd90473 nodeName:}" failed. No retries permitted until 2024-05-28 21:24:29.449804995 +0000 UTC m=+7.165135499 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/22d50d80-dd36-4a5d-bfca-7756cfd90473-config-volume") pod "coredns-6d4b75cb6d-6b782" (UID: "22d50d80-dd36-4a5d-bfca-7756cfd90473") : object "kube-system"/"coredns" not registered
	May 28 21:24:28 test-preload-285104 kubelet[1094]: I0528 21:24:28.950231    1094 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/337096ba-9b57-4846-9eb9-ca453fbd634b-config-volume" (OuterVolumeSpecName: "config-volume") pod "337096ba-9b57-4846-9eb9-ca453fbd634b" (UID: "337096ba-9b57-4846-9eb9-ca453fbd634b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	May 28 21:24:29 test-preload-285104 kubelet[1094]: I0528 21:24:29.050865    1094 reconciler.go:384] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/337096ba-9b57-4846-9eb9-ca453fbd634b-config-volume\") on node \"test-preload-285104\" DevicePath \"\""
	May 28 21:24:29 test-preload-285104 kubelet[1094]: I0528 21:24:29.050896    1094 reconciler.go:384] "Volume detached for volume \"kube-api-access-dz4z4\" (UniqueName: \"kubernetes.io/projected/337096ba-9b57-4846-9eb9-ca453fbd634b-kube-api-access-dz4z4\") on node \"test-preload-285104\" DevicePath \"\""
	May 28 21:24:29 test-preload-285104 kubelet[1094]: E0528 21:24:29.453713    1094 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	May 28 21:24:29 test-preload-285104 kubelet[1094]: E0528 21:24:29.453865    1094 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/22d50d80-dd36-4a5d-bfca-7756cfd90473-config-volume podName:22d50d80-dd36-4a5d-bfca-7756cfd90473 nodeName:}" failed. No retries permitted until 2024-05-28 21:24:30.453849111 +0000 UTC m=+8.169179628 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/22d50d80-dd36-4a5d-bfca-7756cfd90473-config-volume") pod "coredns-6d4b75cb6d-6b782" (UID: "22d50d80-dd36-4a5d-bfca-7756cfd90473") : object "kube-system"/"coredns" not registered
	May 28 21:24:30 test-preload-285104 kubelet[1094]: E0528 21:24:30.461412    1094 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	May 28 21:24:30 test-preload-285104 kubelet[1094]: E0528 21:24:30.461533    1094 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/22d50d80-dd36-4a5d-bfca-7756cfd90473-config-volume podName:22d50d80-dd36-4a5d-bfca-7756cfd90473 nodeName:}" failed. No retries permitted until 2024-05-28 21:24:32.4615114 +0000 UTC m=+10.176841936 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/22d50d80-dd36-4a5d-bfca-7756cfd90473-config-volume") pod "coredns-6d4b75cb6d-6b782" (UID: "22d50d80-dd36-4a5d-bfca-7756cfd90473") : object "kube-system"/"coredns" not registered
	May 28 21:24:30 test-preload-285104 kubelet[1094]: E0528 21:24:30.509055    1094 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-6b782" podUID=22d50d80-dd36-4a5d-bfca-7756cfd90473
	May 28 21:24:30 test-preload-285104 kubelet[1094]: I0528 21:24:30.514118    1094 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=337096ba-9b57-4846-9eb9-ca453fbd634b path="/var/lib/kubelet/pods/337096ba-9b57-4846-9eb9-ca453fbd634b/volumes"
	May 28 21:24:32 test-preload-285104 kubelet[1094]: E0528 21:24:32.477997    1094 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	May 28 21:24:32 test-preload-285104 kubelet[1094]: E0528 21:24:32.478074    1094 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/22d50d80-dd36-4a5d-bfca-7756cfd90473-config-volume podName:22d50d80-dd36-4a5d-bfca-7756cfd90473 nodeName:}" failed. No retries permitted until 2024-05-28 21:24:36.478050684 +0000 UTC m=+14.193381200 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/22d50d80-dd36-4a5d-bfca-7756cfd90473-config-volume") pod "coredns-6d4b75cb6d-6b782" (UID: "22d50d80-dd36-4a5d-bfca-7756cfd90473") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [82d63ba3d9d1dd5b0f7e9a608f84e79f06f46a74c366aaf0b9e7b4382baeec4d] <==
	I0528 21:24:29.891392       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-285104 -n test-preload-285104
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-285104 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-285104" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-285104
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-285104: (1.081308755s)
--- FAIL: TestPreload (250.89s)
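
The kubelet entries above retry the failed config-volume mount with a delay that doubles on each attempt (durationBeforeRetry 500ms, 1s, 2s, 4s) until the coredns ConfigMap is registered. As a rough illustration of that capped exponential-backoff pattern, here is a minimal Go sketch; it is not the kubelet's actual reconciler code, and retryWithBackoff and its parameters are hypothetical names chosen for this example.

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff is a hypothetical helper illustrating the retry pattern
// visible in the kubelet log above: each failed attempt doubles the wait
// (500ms -> 1s -> 2s -> 4s) up to a cap, then gives up after maxAttempts.
func retryWithBackoff(op func() error, initial, max time.Duration, maxAttempts int) error {
	delay := initial
	for attempt := 1; ; attempt++ {
		err := op()
		if err == nil {
			return nil
		}
		if attempt >= maxAttempts {
			return fmt.Errorf("giving up after %d attempts: %w", attempt, err)
		}
		fmt.Printf("attempt %d failed (%v); no retries permitted until %s (durationBeforeRetry %s)\n",
			attempt, err, time.Now().Add(delay).Format(time.RFC3339), delay)
		time.Sleep(delay)
		delay *= 2
		if delay > max {
			delay = max
		}
	}
}

func main() {
	// Simulate a mount that keeps failing because the ConfigMap is not yet
	// registered in the kubelet's object cache (as in the log above).
	mount := func() error {
		return errors.New(`object "kube-system"/"coredns" not registered`)
	}
	if err := retryWithBackoff(mount, 500*time.Millisecond, 8*time.Second, 5); err != nil {
		fmt.Println(err)
	}
}

Running it prints a backoff schedule similar to the nestedpendingoperations.go lines above before giving up.
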

                                                
                                    
TestKubernetesUpgrade (387.73s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-314578 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-314578 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m55.297480052s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-314578] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18966-3963/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-3963/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-314578" primary control-plane node in "kubernetes-upgrade-314578" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0528 21:30:26.703217   52629 out.go:291] Setting OutFile to fd 1 ...
	I0528 21:30:26.703502   52629 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:30:26.703512   52629 out.go:304] Setting ErrFile to fd 2...
	I0528 21:30:26.703519   52629 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:30:26.703682   52629 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
	I0528 21:30:26.704213   52629 out.go:298] Setting JSON to false
	I0528 21:30:26.705072   52629 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4370,"bootTime":1716927457,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0528 21:30:26.705124   52629 start.go:139] virtualization: kvm guest
	I0528 21:30:26.707202   52629 out.go:177] * [kubernetes-upgrade-314578] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0528 21:30:26.708362   52629 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 21:30:26.709419   52629 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 21:30:26.708388   52629 notify.go:220] Checking for updates...
	I0528 21:30:26.711734   52629 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 21:30:26.712772   52629 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 21:30:26.713861   52629 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0528 21:30:26.715015   52629 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 21:30:26.716527   52629 config.go:182] Loaded profile config "NoKubernetes-187083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0528 21:30:26.716656   52629 config.go:182] Loaded profile config "cert-expiration-257793": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 21:30:26.716794   52629 config.go:182] Loaded profile config "pause-547166": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 21:30:26.716896   52629 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 21:30:27.256862   52629 out.go:177] * Using the kvm2 driver based on user configuration
	I0528 21:30:27.258038   52629 start.go:297] selected driver: kvm2
	I0528 21:30:27.258050   52629 start.go:901] validating driver "kvm2" against <nil>
	I0528 21:30:27.258064   52629 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0528 21:30:27.258835   52629 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 21:30:27.274176   52629 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18966-3963/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0528 21:30:27.290778   52629 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0528 21:30:27.290819   52629 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0528 21:30:27.291010   52629 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0528 21:30:27.291067   52629 cni.go:84] Creating CNI manager for ""
	I0528 21:30:27.291078   52629 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 21:30:27.291088   52629 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0528 21:30:27.291140   52629 start.go:340] cluster config:
	{Name:kubernetes-upgrade-314578 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-314578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:30:27.291222   52629 iso.go:125] acquiring lock: {Name:mk0ef48bd19f4019c3ee87391d15b9afc1d5e8d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 21:30:27.292858   52629 out.go:177] * Starting "kubernetes-upgrade-314578" primary control-plane node in "kubernetes-upgrade-314578" cluster
	I0528 21:30:27.293847   52629 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0528 21:30:27.293881   52629 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0528 21:30:27.293903   52629 cache.go:56] Caching tarball of preloaded images
	I0528 21:30:27.293973   52629 preload.go:173] Found /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0528 21:30:27.293988   52629 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0528 21:30:27.294081   52629 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kubernetes-upgrade-314578/config.json ...
	I0528 21:30:27.294104   52629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kubernetes-upgrade-314578/config.json: {Name:mk24f901a347eb60bda77751cf33095c815cdd60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:30:27.294261   52629 start.go:360] acquireMachinesLock for kubernetes-upgrade-314578: {Name:mk6fb3103b370c7f3d9923fd9de7afe2c77239d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0528 21:30:46.858816   52629 start.go:364] duration metric: took 19.564511446s to acquireMachinesLock for "kubernetes-upgrade-314578"
	I0528 21:30:46.858911   52629 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-314578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-314578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0528 21:30:46.859033   52629 start.go:125] createHost starting for "" (driver="kvm2")
	I0528 21:30:46.861108   52629 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0528 21:30:46.861328   52629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:30:46.861389   52629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:30:46.878334   52629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41369
	I0528 21:30:46.878675   52629 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:30:46.879179   52629 main.go:141] libmachine: Using API Version  1
	I0528 21:30:46.879203   52629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:30:46.879507   52629 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:30:46.879709   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetMachineName
	I0528 21:30:46.879847   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .DriverName
	I0528 21:30:46.879978   52629 start.go:159] libmachine.API.Create for "kubernetes-upgrade-314578" (driver="kvm2")
	I0528 21:30:46.880004   52629 client.go:168] LocalClient.Create starting
	I0528 21:30:46.880036   52629 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem
	I0528 21:30:46.880077   52629 main.go:141] libmachine: Decoding PEM data...
	I0528 21:30:46.880098   52629 main.go:141] libmachine: Parsing certificate...
	I0528 21:30:46.880164   52629 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem
	I0528 21:30:46.880193   52629 main.go:141] libmachine: Decoding PEM data...
	I0528 21:30:46.880212   52629 main.go:141] libmachine: Parsing certificate...
	I0528 21:30:46.880237   52629 main.go:141] libmachine: Running pre-create checks...
	I0528 21:30:46.880256   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .PreCreateCheck
	I0528 21:30:46.880599   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetConfigRaw
	I0528 21:30:46.880980   52629 main.go:141] libmachine: Creating machine...
	I0528 21:30:46.880994   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .Create
	I0528 21:30:46.881111   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Creating KVM machine...
	I0528 21:30:46.882270   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | found existing default KVM network
	I0528 21:30:46.883637   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | I0528 21:30:46.883481   52831 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012de30}
	I0528 21:30:46.883661   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | created network xml: 
	I0528 21:30:46.883683   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | <network>
	I0528 21:30:46.883693   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG |   <name>mk-kubernetes-upgrade-314578</name>
	I0528 21:30:46.883705   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG |   <dns enable='no'/>
	I0528 21:30:46.883717   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG |   
	I0528 21:30:46.883732   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0528 21:30:46.883742   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG |     <dhcp>
	I0528 21:30:46.883754   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0528 21:30:46.883762   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG |     </dhcp>
	I0528 21:30:46.883769   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG |   </ip>
	I0528 21:30:46.883778   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG |   
	I0528 21:30:46.883787   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | </network>
	I0528 21:30:46.883801   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | 
	I0528 21:30:46.888929   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | trying to create private KVM network mk-kubernetes-upgrade-314578 192.168.39.0/24...
	I0528 21:30:46.958599   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | private KVM network mk-kubernetes-upgrade-314578 192.168.39.0/24 created
	I0528 21:30:46.958637   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Setting up store path in /home/jenkins/minikube-integration/18966-3963/.minikube/machines/kubernetes-upgrade-314578 ...
	I0528 21:30:46.958666   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | I0528 21:30:46.958548   52831 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 21:30:46.958686   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Building disk image from file:///home/jenkins/minikube-integration/18966-3963/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso
	I0528 21:30:46.959223   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Downloading /home/jenkins/minikube-integration/18966-3963/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18966-3963/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0528 21:30:47.198318   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | I0528 21:30:47.198178   52831 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/kubernetes-upgrade-314578/id_rsa...
	I0528 21:30:47.426576   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | I0528 21:30:47.426441   52831 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/kubernetes-upgrade-314578/kubernetes-upgrade-314578.rawdisk...
	I0528 21:30:47.426608   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | Writing magic tar header
	I0528 21:30:47.426625   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | Writing SSH key tar header
	I0528 21:30:47.426645   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | I0528 21:30:47.426599   52831 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18966-3963/.minikube/machines/kubernetes-upgrade-314578 ...
	I0528 21:30:47.426767   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/kubernetes-upgrade-314578
	I0528 21:30:47.426790   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18966-3963/.minikube/machines
	I0528 21:30:47.426803   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Setting executable bit set on /home/jenkins/minikube-integration/18966-3963/.minikube/machines/kubernetes-upgrade-314578 (perms=drwx------)
	I0528 21:30:47.426816   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 21:30:47.426831   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Setting executable bit set on /home/jenkins/minikube-integration/18966-3963/.minikube/machines (perms=drwxr-xr-x)
	I0528 21:30:47.426868   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Setting executable bit set on /home/jenkins/minikube-integration/18966-3963/.minikube (perms=drwxr-xr-x)
	I0528 21:30:47.426883   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18966-3963
	I0528 21:30:47.426894   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Setting executable bit set on /home/jenkins/minikube-integration/18966-3963 (perms=drwxrwxr-x)
	I0528 21:30:47.426907   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0528 21:30:47.426919   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0528 21:30:47.426937   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0528 21:30:47.426949   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Creating domain...
	I0528 21:30:47.426965   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | Checking permissions on dir: /home/jenkins
	I0528 21:30:47.426980   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | Checking permissions on dir: /home
	I0528 21:30:47.426993   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | Skipping /home - not owner
	I0528 21:30:47.428076   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) define libvirt domain using xml: 
	I0528 21:30:47.428104   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) <domain type='kvm'>
	I0528 21:30:47.428117   52629 main.go:141] libmachine: (kubernetes-upgrade-314578)   <name>kubernetes-upgrade-314578</name>
	I0528 21:30:47.428125   52629 main.go:141] libmachine: (kubernetes-upgrade-314578)   <memory unit='MiB'>2200</memory>
	I0528 21:30:47.428136   52629 main.go:141] libmachine: (kubernetes-upgrade-314578)   <vcpu>2</vcpu>
	I0528 21:30:47.428146   52629 main.go:141] libmachine: (kubernetes-upgrade-314578)   <features>
	I0528 21:30:47.428155   52629 main.go:141] libmachine: (kubernetes-upgrade-314578)     <acpi/>
	I0528 21:30:47.428165   52629 main.go:141] libmachine: (kubernetes-upgrade-314578)     <apic/>
	I0528 21:30:47.428176   52629 main.go:141] libmachine: (kubernetes-upgrade-314578)     <pae/>
	I0528 21:30:47.428185   52629 main.go:141] libmachine: (kubernetes-upgrade-314578)     
	I0528 21:30:47.428190   52629 main.go:141] libmachine: (kubernetes-upgrade-314578)   </features>
	I0528 21:30:47.428199   52629 main.go:141] libmachine: (kubernetes-upgrade-314578)   <cpu mode='host-passthrough'>
	I0528 21:30:47.428225   52629 main.go:141] libmachine: (kubernetes-upgrade-314578)   
	I0528 21:30:47.428248   52629 main.go:141] libmachine: (kubernetes-upgrade-314578)   </cpu>
	I0528 21:30:47.428259   52629 main.go:141] libmachine: (kubernetes-upgrade-314578)   <os>
	I0528 21:30:47.428293   52629 main.go:141] libmachine: (kubernetes-upgrade-314578)     <type>hvm</type>
	I0528 21:30:47.428307   52629 main.go:141] libmachine: (kubernetes-upgrade-314578)     <boot dev='cdrom'/>
	I0528 21:30:47.428315   52629 main.go:141] libmachine: (kubernetes-upgrade-314578)     <boot dev='hd'/>
	I0528 21:30:47.428345   52629 main.go:141] libmachine: (kubernetes-upgrade-314578)     <bootmenu enable='no'/>
	I0528 21:30:47.428367   52629 main.go:141] libmachine: (kubernetes-upgrade-314578)   </os>
	I0528 21:30:47.428388   52629 main.go:141] libmachine: (kubernetes-upgrade-314578)   <devices>
	I0528 21:30:47.428407   52629 main.go:141] libmachine: (kubernetes-upgrade-314578)     <disk type='file' device='cdrom'>
	I0528 21:30:47.428429   52629 main.go:141] libmachine: (kubernetes-upgrade-314578)       <source file='/home/jenkins/minikube-integration/18966-3963/.minikube/machines/kubernetes-upgrade-314578/boot2docker.iso'/>
	I0528 21:30:47.428442   52629 main.go:141] libmachine: (kubernetes-upgrade-314578)       <target dev='hdc' bus='scsi'/>
	I0528 21:30:47.428455   52629 main.go:141] libmachine: (kubernetes-upgrade-314578)       <readonly/>
	I0528 21:30:47.428465   52629 main.go:141] libmachine: (kubernetes-upgrade-314578)     </disk>
	I0528 21:30:47.428474   52629 main.go:141] libmachine: (kubernetes-upgrade-314578)     <disk type='file' device='disk'>
	I0528 21:30:47.428480   52629 main.go:141] libmachine: (kubernetes-upgrade-314578)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0528 21:30:47.428519   52629 main.go:141] libmachine: (kubernetes-upgrade-314578)       <source file='/home/jenkins/minikube-integration/18966-3963/.minikube/machines/kubernetes-upgrade-314578/kubernetes-upgrade-314578.rawdisk'/>
	I0528 21:30:47.428539   52629 main.go:141] libmachine: (kubernetes-upgrade-314578)       <target dev='hda' bus='virtio'/>
	I0528 21:30:47.428553   52629 main.go:141] libmachine: (kubernetes-upgrade-314578)     </disk>
	I0528 21:30:47.428565   52629 main.go:141] libmachine: (kubernetes-upgrade-314578)     <interface type='network'>
	I0528 21:30:47.428592   52629 main.go:141] libmachine: (kubernetes-upgrade-314578)       <source network='mk-kubernetes-upgrade-314578'/>
	I0528 21:30:47.428610   52629 main.go:141] libmachine: (kubernetes-upgrade-314578)       <model type='virtio'/>
	I0528 21:30:47.428623   52629 main.go:141] libmachine: (kubernetes-upgrade-314578)     </interface>
	I0528 21:30:47.428635   52629 main.go:141] libmachine: (kubernetes-upgrade-314578)     <interface type='network'>
	I0528 21:30:47.428649   52629 main.go:141] libmachine: (kubernetes-upgrade-314578)       <source network='default'/>
	I0528 21:30:47.428661   52629 main.go:141] libmachine: (kubernetes-upgrade-314578)       <model type='virtio'/>
	I0528 21:30:47.428674   52629 main.go:141] libmachine: (kubernetes-upgrade-314578)     </interface>
	I0528 21:30:47.428689   52629 main.go:141] libmachine: (kubernetes-upgrade-314578)     <serial type='pty'>
	I0528 21:30:47.428702   52629 main.go:141] libmachine: (kubernetes-upgrade-314578)       <target port='0'/>
	I0528 21:30:47.428710   52629 main.go:141] libmachine: (kubernetes-upgrade-314578)     </serial>
	I0528 21:30:47.428723   52629 main.go:141] libmachine: (kubernetes-upgrade-314578)     <console type='pty'>
	I0528 21:30:47.428735   52629 main.go:141] libmachine: (kubernetes-upgrade-314578)       <target type='serial' port='0'/>
	I0528 21:30:47.428748   52629 main.go:141] libmachine: (kubernetes-upgrade-314578)     </console>
	I0528 21:30:47.428764   52629 main.go:141] libmachine: (kubernetes-upgrade-314578)     <rng model='virtio'>
	I0528 21:30:47.428778   52629 main.go:141] libmachine: (kubernetes-upgrade-314578)       <backend model='random'>/dev/random</backend>
	I0528 21:30:47.428790   52629 main.go:141] libmachine: (kubernetes-upgrade-314578)     </rng>
	I0528 21:30:47.428801   52629 main.go:141] libmachine: (kubernetes-upgrade-314578)     
	I0528 21:30:47.428813   52629 main.go:141] libmachine: (kubernetes-upgrade-314578)     
	I0528 21:30:47.428825   52629 main.go:141] libmachine: (kubernetes-upgrade-314578)   </devices>
	I0528 21:30:47.428840   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) </domain>
	I0528 21:30:47.428856   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) 
	I0528 21:30:47.436950   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined MAC address 52:54:00:a8:bf:da in network default
	I0528 21:30:47.437679   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Ensuring networks are active...
	I0528 21:30:47.437707   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:30:47.438428   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Ensuring network default is active
	I0528 21:30:47.438745   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Ensuring network mk-kubernetes-upgrade-314578 is active
	I0528 21:30:47.439218   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Getting domain xml...
	I0528 21:30:47.439919   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Creating domain...
	I0528 21:30:48.819844   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Waiting to get IP...
	I0528 21:30:48.820771   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:30:48.821247   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | unable to find current IP address of domain kubernetes-upgrade-314578 in network mk-kubernetes-upgrade-314578
	I0528 21:30:48.821277   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | I0528 21:30:48.821225   52831 retry.go:31] will retry after 235.552578ms: waiting for machine to come up
	I0528 21:30:49.058858   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:30:49.059525   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | unable to find current IP address of domain kubernetes-upgrade-314578 in network mk-kubernetes-upgrade-314578
	I0528 21:30:49.059551   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | I0528 21:30:49.059476   52831 retry.go:31] will retry after 383.769569ms: waiting for machine to come up
	I0528 21:30:49.445230   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:30:49.445797   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | unable to find current IP address of domain kubernetes-upgrade-314578 in network mk-kubernetes-upgrade-314578
	I0528 21:30:49.445827   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | I0528 21:30:49.445725   52831 retry.go:31] will retry after 448.944514ms: waiting for machine to come up
	I0528 21:30:49.896563   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:30:49.897094   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | unable to find current IP address of domain kubernetes-upgrade-314578 in network mk-kubernetes-upgrade-314578
	I0528 21:30:49.897124   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | I0528 21:30:49.897050   52831 retry.go:31] will retry after 563.639869ms: waiting for machine to come up
	I0528 21:30:50.462897   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:30:50.463517   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | unable to find current IP address of domain kubernetes-upgrade-314578 in network mk-kubernetes-upgrade-314578
	I0528 21:30:50.463549   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | I0528 21:30:50.463467   52831 retry.go:31] will retry after 596.266832ms: waiting for machine to come up
	I0528 21:30:51.061230   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:30:51.061638   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | unable to find current IP address of domain kubernetes-upgrade-314578 in network mk-kubernetes-upgrade-314578
	I0528 21:30:51.061657   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | I0528 21:30:51.061608   52831 retry.go:31] will retry after 925.883725ms: waiting for machine to come up
	I0528 21:30:51.989501   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:30:51.990069   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | unable to find current IP address of domain kubernetes-upgrade-314578 in network mk-kubernetes-upgrade-314578
	I0528 21:30:51.990108   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | I0528 21:30:51.990018   52831 retry.go:31] will retry after 833.857038ms: waiting for machine to come up
	I0528 21:30:52.825224   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:30:52.825608   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | unable to find current IP address of domain kubernetes-upgrade-314578 in network mk-kubernetes-upgrade-314578
	I0528 21:30:52.825637   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | I0528 21:30:52.825556   52831 retry.go:31] will retry after 946.792ms: waiting for machine to come up
	I0528 21:30:53.773725   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:30:53.774329   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | unable to find current IP address of domain kubernetes-upgrade-314578 in network mk-kubernetes-upgrade-314578
	I0528 21:30:53.774351   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | I0528 21:30:53.774294   52831 retry.go:31] will retry after 1.342793537s: waiting for machine to come up
	I0528 21:30:55.118291   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:30:55.118771   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | unable to find current IP address of domain kubernetes-upgrade-314578 in network mk-kubernetes-upgrade-314578
	I0528 21:30:55.118803   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | I0528 21:30:55.118716   52831 retry.go:31] will retry after 1.955917049s: waiting for machine to come up
	I0528 21:30:57.076279   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:30:57.076769   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | unable to find current IP address of domain kubernetes-upgrade-314578 in network mk-kubernetes-upgrade-314578
	I0528 21:30:57.076797   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | I0528 21:30:57.076727   52831 retry.go:31] will retry after 1.918157587s: waiting for machine to come up
	I0528 21:30:58.997205   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:30:58.997843   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | unable to find current IP address of domain kubernetes-upgrade-314578 in network mk-kubernetes-upgrade-314578
	I0528 21:30:58.997877   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | I0528 21:30:58.997733   52831 retry.go:31] will retry after 2.234353876s: waiting for machine to come up
	I0528 21:31:01.234858   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:31:01.235341   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | unable to find current IP address of domain kubernetes-upgrade-314578 in network mk-kubernetes-upgrade-314578
	I0528 21:31:01.235363   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | I0528 21:31:01.235296   52831 retry.go:31] will retry after 4.480472141s: waiting for machine to come up
	I0528 21:31:05.718784   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:31:05.719163   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | unable to find current IP address of domain kubernetes-upgrade-314578 in network mk-kubernetes-upgrade-314578
	I0528 21:31:05.719187   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | I0528 21:31:05.719112   52831 retry.go:31] will retry after 5.240151135s: waiting for machine to come up
	I0528 21:31:10.964011   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:31:10.964618   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Found IP for machine: 192.168.39.174
	I0528 21:31:10.964663   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has current primary IP address 192.168.39.174 and MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:31:10.964675   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Reserving static IP address...
	I0528 21:31:10.965024   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-314578", mac: "52:54:00:f7:04:a3", ip: "192.168.39.174"} in network mk-kubernetes-upgrade-314578
	I0528 21:31:11.040471   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | Getting to WaitForSSH function...
	I0528 21:31:11.040509   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Reserved static IP address: 192.168.39.174
	I0528 21:31:11.040524   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Waiting for SSH to be available...
	I0528 21:31:11.043390   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:31:11.043746   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:f7:04:a3", ip: ""} in network mk-kubernetes-upgrade-314578
	I0528 21:31:11.043774   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | unable to find defined IP address of network mk-kubernetes-upgrade-314578 interface with MAC address 52:54:00:f7:04:a3
	I0528 21:31:11.043943   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | Using SSH client type: external
	I0528 21:31:11.043970   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | Using SSH private key: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/kubernetes-upgrade-314578/id_rsa (-rw-------)
	I0528 21:31:11.043997   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18966-3963/.minikube/machines/kubernetes-upgrade-314578/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0528 21:31:11.044018   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | About to run SSH command:
	I0528 21:31:11.044062   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | exit 0
	I0528 21:31:11.047549   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | SSH cmd err, output: exit status 255: 
	I0528 21:31:11.047593   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0528 21:31:11.047610   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | command : exit 0
	I0528 21:31:11.047623   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | err     : exit status 255
	I0528 21:31:11.047638   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | output  : 
	I0528 21:31:14.049270   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | Getting to WaitForSSH function...
	I0528 21:31:14.052572   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:31:14.053028   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:04:a3", ip: ""} in network mk-kubernetes-upgrade-314578: {Iface:virbr1 ExpiryTime:2024-05-28 22:31:01 +0000 UTC Type:0 Mac:52:54:00:f7:04:a3 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:kubernetes-upgrade-314578 Clientid:01:52:54:00:f7:04:a3}
	I0528 21:31:14.053074   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined IP address 192.168.39.174 and MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:31:14.053244   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | Using SSH client type: external
	I0528 21:31:14.053274   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | Using SSH private key: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/kubernetes-upgrade-314578/id_rsa (-rw-------)
	I0528 21:31:14.053315   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18966-3963/.minikube/machines/kubernetes-upgrade-314578/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0528 21:31:14.053347   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | About to run SSH command:
	I0528 21:31:14.053363   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | exit 0
	I0528 21:31:14.186122   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | SSH cmd err, output: <nil>: 
	I0528 21:31:14.186340   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) KVM machine creation complete!
	I0528 21:31:14.186690   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetConfigRaw
	I0528 21:31:14.187227   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .DriverName
	I0528 21:31:14.187406   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .DriverName
	I0528 21:31:14.187588   52629 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0528 21:31:14.187602   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetState
	I0528 21:31:14.188935   52629 main.go:141] libmachine: Detecting operating system of created instance...
	I0528 21:31:14.188948   52629 main.go:141] libmachine: Waiting for SSH to be available...
	I0528 21:31:14.188953   52629 main.go:141] libmachine: Getting to WaitForSSH function...
	I0528 21:31:14.188959   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHHostname
	I0528 21:31:14.191759   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:31:14.192066   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:04:a3", ip: ""} in network mk-kubernetes-upgrade-314578: {Iface:virbr1 ExpiryTime:2024-05-28 22:31:01 +0000 UTC Type:0 Mac:52:54:00:f7:04:a3 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:kubernetes-upgrade-314578 Clientid:01:52:54:00:f7:04:a3}
	I0528 21:31:14.192086   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined IP address 192.168.39.174 and MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:31:14.192252   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHPort
	I0528 21:31:14.192450   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHKeyPath
	I0528 21:31:14.192631   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHKeyPath
	I0528 21:31:14.192776   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHUsername
	I0528 21:31:14.193006   52629 main.go:141] libmachine: Using SSH client type: native
	I0528 21:31:14.193217   52629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0528 21:31:14.193230   52629 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0528 21:31:14.296949   52629 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 21:31:14.296980   52629 main.go:141] libmachine: Detecting the provisioner...
	I0528 21:31:14.297002   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHHostname
	I0528 21:31:14.300035   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:31:14.300377   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:04:a3", ip: ""} in network mk-kubernetes-upgrade-314578: {Iface:virbr1 ExpiryTime:2024-05-28 22:31:01 +0000 UTC Type:0 Mac:52:54:00:f7:04:a3 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:kubernetes-upgrade-314578 Clientid:01:52:54:00:f7:04:a3}
	I0528 21:31:14.300410   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined IP address 192.168.39.174 and MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:31:14.300540   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHPort
	I0528 21:31:14.300722   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHKeyPath
	I0528 21:31:14.300878   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHKeyPath
	I0528 21:31:14.301004   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHUsername
	I0528 21:31:14.301185   52629 main.go:141] libmachine: Using SSH client type: native
	I0528 21:31:14.301394   52629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0528 21:31:14.301406   52629 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0528 21:31:14.410786   52629 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0528 21:31:14.410870   52629 main.go:141] libmachine: found compatible host: buildroot
	I0528 21:31:14.410883   52629 main.go:141] libmachine: Provisioning with buildroot...
	I0528 21:31:14.410895   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetMachineName
	I0528 21:31:14.411170   52629 buildroot.go:166] provisioning hostname "kubernetes-upgrade-314578"
	I0528 21:31:14.411194   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetMachineName
	I0528 21:31:14.411389   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHHostname
	I0528 21:31:14.414157   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:31:14.414500   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:04:a3", ip: ""} in network mk-kubernetes-upgrade-314578: {Iface:virbr1 ExpiryTime:2024-05-28 22:31:01 +0000 UTC Type:0 Mac:52:54:00:f7:04:a3 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:kubernetes-upgrade-314578 Clientid:01:52:54:00:f7:04:a3}
	I0528 21:31:14.414534   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined IP address 192.168.39.174 and MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:31:14.414669   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHPort
	I0528 21:31:14.414887   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHKeyPath
	I0528 21:31:14.415064   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHKeyPath
	I0528 21:31:14.415243   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHUsername
	I0528 21:31:14.415424   52629 main.go:141] libmachine: Using SSH client type: native
	I0528 21:31:14.415621   52629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0528 21:31:14.415639   52629 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-314578 && echo "kubernetes-upgrade-314578" | sudo tee /etc/hostname
	I0528 21:31:14.537092   52629 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-314578
	
	I0528 21:31:14.537118   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHHostname
	I0528 21:31:14.539632   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:31:14.539998   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:04:a3", ip: ""} in network mk-kubernetes-upgrade-314578: {Iface:virbr1 ExpiryTime:2024-05-28 22:31:01 +0000 UTC Type:0 Mac:52:54:00:f7:04:a3 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:kubernetes-upgrade-314578 Clientid:01:52:54:00:f7:04:a3}
	I0528 21:31:14.540034   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined IP address 192.168.39.174 and MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:31:14.540211   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHPort
	I0528 21:31:14.540431   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHKeyPath
	I0528 21:31:14.540598   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHKeyPath
	I0528 21:31:14.540777   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHUsername
	I0528 21:31:14.540968   52629 main.go:141] libmachine: Using SSH client type: native
	I0528 21:31:14.541134   52629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0528 21:31:14.541149   52629 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-314578' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-314578/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-314578' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 21:31:14.654612   52629 main.go:141] libmachine: SSH cmd err, output: <nil>: 
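	The hostname provisioning above is idempotent: it sets the transient hostname over SSH, then rewrites the 127.0.1.1 entry in /etc/hosts if one exists and appends it otherwise. A minimal verification sketch over the same SSH session (not part of the captured output, shown only for illustration):

	  hostname                          # expect: kubernetes-upgrade-314578
	  grep -n '127.0.1.1' /etc/hosts    # expect: 127.0.1.1 kubernetes-upgrade-314578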
	I0528 21:31:14.654644   52629 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18966-3963/.minikube CaCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18966-3963/.minikube}
	I0528 21:31:14.654700   52629 buildroot.go:174] setting up certificates
	I0528 21:31:14.654724   52629 provision.go:84] configureAuth start
	I0528 21:31:14.654737   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetMachineName
	I0528 21:31:14.655012   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetIP
	I0528 21:31:14.657509   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:31:14.657999   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:04:a3", ip: ""} in network mk-kubernetes-upgrade-314578: {Iface:virbr1 ExpiryTime:2024-05-28 22:31:01 +0000 UTC Type:0 Mac:52:54:00:f7:04:a3 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:kubernetes-upgrade-314578 Clientid:01:52:54:00:f7:04:a3}
	I0528 21:31:14.658023   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined IP address 192.168.39.174 and MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:31:14.658271   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHHostname
	I0528 21:31:14.660640   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:31:14.660922   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:04:a3", ip: ""} in network mk-kubernetes-upgrade-314578: {Iface:virbr1 ExpiryTime:2024-05-28 22:31:01 +0000 UTC Type:0 Mac:52:54:00:f7:04:a3 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:kubernetes-upgrade-314578 Clientid:01:52:54:00:f7:04:a3}
	I0528 21:31:14.660950   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined IP address 192.168.39.174 and MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:31:14.661096   52629 provision.go:143] copyHostCerts
	I0528 21:31:14.661170   52629 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem, removing ...
	I0528 21:31:14.661183   52629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem
	I0528 21:31:14.661241   52629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem (1078 bytes)
	I0528 21:31:14.661352   52629 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem, removing ...
	I0528 21:31:14.661363   52629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem
	I0528 21:31:14.661393   52629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem (1123 bytes)
	I0528 21:31:14.661479   52629 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem, removing ...
	I0528 21:31:14.661487   52629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem
	I0528 21:31:14.661511   52629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem (1675 bytes)
	I0528 21:31:14.661605   52629 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-314578 san=[127.0.0.1 192.168.39.174 kubernetes-upgrade-314578 localhost minikube]
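	The server certificate generated here carries the SANs listed in the san=[...] field above. A minimal sketch for inspecting them with openssl, assuming the server.pem path from the log (illustrative, not part of the test run):

	  openssl x509 -noout -text \
	    -in /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem \
	    | grep -A1 'Subject Alternative Name'
	  # expect entries for 127.0.0.1, 192.168.39.174, kubernetes-upgrade-314578, localhost, minikube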
	I0528 21:31:14.874810   52629 provision.go:177] copyRemoteCerts
	I0528 21:31:14.874863   52629 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 21:31:14.874890   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHHostname
	I0528 21:31:14.877416   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:31:14.877652   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:04:a3", ip: ""} in network mk-kubernetes-upgrade-314578: {Iface:virbr1 ExpiryTime:2024-05-28 22:31:01 +0000 UTC Type:0 Mac:52:54:00:f7:04:a3 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:kubernetes-upgrade-314578 Clientid:01:52:54:00:f7:04:a3}
	I0528 21:31:14.877678   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined IP address 192.168.39.174 and MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:31:14.877901   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHPort
	I0528 21:31:14.878154   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHKeyPath
	I0528 21:31:14.878342   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHUsername
	I0528 21:31:14.878493   52629 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/kubernetes-upgrade-314578/id_rsa Username:docker}
	I0528 21:31:14.966090   52629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0528 21:31:14.994465   52629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0528 21:31:15.018698   52629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0528 21:31:15.043759   52629 provision.go:87] duration metric: took 389.023333ms to configureAuth
	I0528 21:31:15.043802   52629 buildroot.go:189] setting minikube options for container-runtime
	I0528 21:31:15.044029   52629 config.go:182] Loaded profile config "kubernetes-upgrade-314578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0528 21:31:15.044129   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHHostname
	I0528 21:31:15.046779   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:31:15.047149   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:04:a3", ip: ""} in network mk-kubernetes-upgrade-314578: {Iface:virbr1 ExpiryTime:2024-05-28 22:31:01 +0000 UTC Type:0 Mac:52:54:00:f7:04:a3 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:kubernetes-upgrade-314578 Clientid:01:52:54:00:f7:04:a3}
	I0528 21:31:15.047177   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined IP address 192.168.39.174 and MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:31:15.047417   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHPort
	I0528 21:31:15.047641   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHKeyPath
	I0528 21:31:15.047816   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHKeyPath
	I0528 21:31:15.047943   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHUsername
	I0528 21:31:15.048070   52629 main.go:141] libmachine: Using SSH client type: native
	I0528 21:31:15.048268   52629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0528 21:31:15.048297   52629 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0528 21:31:15.319668   52629 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0528 21:31:15.319697   52629 main.go:141] libmachine: Checking connection to Docker...
	I0528 21:31:15.319716   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetURL
	I0528 21:31:15.321133   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | Using libvirt version 6000000
	I0528 21:31:15.323548   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:31:15.323940   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:04:a3", ip: ""} in network mk-kubernetes-upgrade-314578: {Iface:virbr1 ExpiryTime:2024-05-28 22:31:01 +0000 UTC Type:0 Mac:52:54:00:f7:04:a3 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:kubernetes-upgrade-314578 Clientid:01:52:54:00:f7:04:a3}
	I0528 21:31:15.323988   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined IP address 192.168.39.174 and MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:31:15.324175   52629 main.go:141] libmachine: Docker is up and running!
	I0528 21:31:15.324194   52629 main.go:141] libmachine: Reticulating splines...
	I0528 21:31:15.324202   52629 client.go:171] duration metric: took 28.444188578s to LocalClient.Create
	I0528 21:31:15.324227   52629 start.go:167] duration metric: took 28.444249469s to libmachine.API.Create "kubernetes-upgrade-314578"
	I0528 21:31:15.324237   52629 start.go:293] postStartSetup for "kubernetes-upgrade-314578" (driver="kvm2")
	I0528 21:31:15.324248   52629 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0528 21:31:15.324266   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .DriverName
	I0528 21:31:15.324502   52629 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0528 21:31:15.324528   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHHostname
	I0528 21:31:15.326861   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:31:15.327250   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:04:a3", ip: ""} in network mk-kubernetes-upgrade-314578: {Iface:virbr1 ExpiryTime:2024-05-28 22:31:01 +0000 UTC Type:0 Mac:52:54:00:f7:04:a3 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:kubernetes-upgrade-314578 Clientid:01:52:54:00:f7:04:a3}
	I0528 21:31:15.327272   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined IP address 192.168.39.174 and MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:31:15.327383   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHPort
	I0528 21:31:15.327565   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHKeyPath
	I0528 21:31:15.327704   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHUsername
	I0528 21:31:15.327882   52629 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/kubernetes-upgrade-314578/id_rsa Username:docker}
	I0528 21:31:15.417093   52629 ssh_runner.go:195] Run: cat /etc/os-release
	I0528 21:31:15.421397   52629 info.go:137] Remote host: Buildroot 2023.02.9
	I0528 21:31:15.421419   52629 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/addons for local assets ...
	I0528 21:31:15.421498   52629 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/files for local assets ...
	I0528 21:31:15.421604   52629 filesync.go:149] local asset: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem -> 117602.pem in /etc/ssl/certs
	I0528 21:31:15.421736   52629 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0528 21:31:15.432089   52629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem --> /etc/ssl/certs/117602.pem (1708 bytes)
	I0528 21:31:15.458009   52629 start.go:296] duration metric: took 133.75728ms for postStartSetup
	I0528 21:31:15.458066   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetConfigRaw
	I0528 21:31:15.458963   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetIP
	I0528 21:31:15.462617   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:31:15.462902   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:04:a3", ip: ""} in network mk-kubernetes-upgrade-314578: {Iface:virbr1 ExpiryTime:2024-05-28 22:31:01 +0000 UTC Type:0 Mac:52:54:00:f7:04:a3 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:kubernetes-upgrade-314578 Clientid:01:52:54:00:f7:04:a3}
	I0528 21:31:15.462927   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined IP address 192.168.39.174 and MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:31:15.463152   52629 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kubernetes-upgrade-314578/config.json ...
	I0528 21:31:15.463372   52629 start.go:128] duration metric: took 28.60432784s to createHost
	I0528 21:31:15.463417   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHHostname
	I0528 21:31:15.465832   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:31:15.466165   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:04:a3", ip: ""} in network mk-kubernetes-upgrade-314578: {Iface:virbr1 ExpiryTime:2024-05-28 22:31:01 +0000 UTC Type:0 Mac:52:54:00:f7:04:a3 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:kubernetes-upgrade-314578 Clientid:01:52:54:00:f7:04:a3}
	I0528 21:31:15.466197   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined IP address 192.168.39.174 and MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:31:15.466365   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHPort
	I0528 21:31:15.466545   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHKeyPath
	I0528 21:31:15.466741   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHKeyPath
	I0528 21:31:15.466892   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHUsername
	I0528 21:31:15.467059   52629 main.go:141] libmachine: Using SSH client type: native
	I0528 21:31:15.467203   52629 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0528 21:31:15.467212   52629 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0528 21:31:15.578679   52629 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716931875.557615802
	
	I0528 21:31:15.578705   52629 fix.go:216] guest clock: 1716931875.557615802
	I0528 21:31:15.578712   52629 fix.go:229] Guest: 2024-05-28 21:31:15.557615802 +0000 UTC Remote: 2024-05-28 21:31:15.463384894 +0000 UTC m=+48.794805794 (delta=94.230908ms)
	I0528 21:31:15.578731   52629 fix.go:200] guest clock delta is within tolerance: 94.230908ms
	I0528 21:31:15.578736   52629 start.go:83] releasing machines lock for "kubernetes-upgrade-314578", held for 28.719869138s
	I0528 21:31:15.578765   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .DriverName
	I0528 21:31:15.579039   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetIP
	I0528 21:31:15.582243   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:31:15.582654   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:04:a3", ip: ""} in network mk-kubernetes-upgrade-314578: {Iface:virbr1 ExpiryTime:2024-05-28 22:31:01 +0000 UTC Type:0 Mac:52:54:00:f7:04:a3 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:kubernetes-upgrade-314578 Clientid:01:52:54:00:f7:04:a3}
	I0528 21:31:15.582686   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined IP address 192.168.39.174 and MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:31:15.582868   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .DriverName
	I0528 21:31:15.583514   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .DriverName
	I0528 21:31:15.583691   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .DriverName
	I0528 21:31:15.583796   52629 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0528 21:31:15.583833   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHHostname
	I0528 21:31:15.583847   52629 ssh_runner.go:195] Run: cat /version.json
	I0528 21:31:15.583871   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHHostname
	I0528 21:31:15.587881   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:31:15.588027   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:31:15.588317   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:04:a3", ip: ""} in network mk-kubernetes-upgrade-314578: {Iface:virbr1 ExpiryTime:2024-05-28 22:31:01 +0000 UTC Type:0 Mac:52:54:00:f7:04:a3 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:kubernetes-upgrade-314578 Clientid:01:52:54:00:f7:04:a3}
	I0528 21:31:15.588365   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined IP address 192.168.39.174 and MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:31:15.588442   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:04:a3", ip: ""} in network mk-kubernetes-upgrade-314578: {Iface:virbr1 ExpiryTime:2024-05-28 22:31:01 +0000 UTC Type:0 Mac:52:54:00:f7:04:a3 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:kubernetes-upgrade-314578 Clientid:01:52:54:00:f7:04:a3}
	I0528 21:31:15.588490   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHPort
	I0528 21:31:15.588505   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined IP address 192.168.39.174 and MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:31:15.588689   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHPort
	I0528 21:31:15.588718   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHKeyPath
	I0528 21:31:15.588862   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHUsername
	I0528 21:31:15.588862   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHKeyPath
	I0528 21:31:15.589028   52629 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/kubernetes-upgrade-314578/id_rsa Username:docker}
	I0528 21:31:15.589044   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHUsername
	I0528 21:31:15.589189   52629 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/kubernetes-upgrade-314578/id_rsa Username:docker}
	I0528 21:31:15.694476   52629 ssh_runner.go:195] Run: systemctl --version
	I0528 21:31:15.700673   52629 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0528 21:31:15.867347   52629 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0528 21:31:15.874310   52629 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0528 21:31:15.874390   52629 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 21:31:15.893990   52629 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0528 21:31:15.894013   52629 start.go:494] detecting cgroup driver to use...
	I0528 21:31:15.894095   52629 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0528 21:31:15.914209   52629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 21:31:15.932001   52629 docker.go:217] disabling cri-docker service (if available) ...
	I0528 21:31:15.932064   52629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0528 21:31:15.948593   52629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0528 21:31:15.965731   52629 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0528 21:31:16.108750   52629 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0528 21:31:16.282173   52629 docker.go:233] disabling docker service ...
	I0528 21:31:16.282241   52629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0528 21:31:16.297687   52629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0528 21:31:16.311049   52629 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0528 21:31:16.459200   52629 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0528 21:31:16.599744   52629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0528 21:31:16.615385   52629 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 21:31:16.638079   52629 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0528 21:31:16.638149   52629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:31:16.651359   52629 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0528 21:31:16.651435   52629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:31:16.663601   52629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:31:16.682058   52629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
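	The /etc/crictl.yaml written a few lines above points crictl at the CRI-O socket by default, and the three sed edits just run pin the pause image, the cgroup manager, and the conmon cgroup in the CRI-O drop-in. A minimal sketch for confirming both, assuming the paths shown in the log (illustrative only):

	  # equivalent to relying on the runtime-endpoint default from /etc/crictl.yaml
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	  # values expected after the sed edits above
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf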
	I0528 21:31:16.694225   52629 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0528 21:31:16.707011   52629 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0528 21:31:16.719494   52629 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0528 21:31:16.719577   52629 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0528 21:31:16.734723   52629 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
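	The sysctl probe above fails only because br_netfilter is not loaded yet, so the module is loaded and IPv4 forwarding is enabled before the daemon-reload and CRI-O restart that follow. The same check-then-load fallback, condensed into a standalone sketch (error handling is an assumption):

	  if ! sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
	    sudo modprobe br_netfilter          # provides the bridge-nf-call-* sysctls
	  fi
	  sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'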
	I0528 21:31:16.745596   52629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 21:31:16.884391   52629 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0528 21:31:17.043954   52629 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0528 21:31:17.044043   52629 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0528 21:31:17.049789   52629 start.go:562] Will wait 60s for crictl version
	I0528 21:31:17.049860   52629 ssh_runner.go:195] Run: which crictl
	I0528 21:31:17.053926   52629 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0528 21:31:17.100400   52629 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0528 21:31:17.100476   52629 ssh_runner.go:195] Run: crio --version
	I0528 21:31:17.132047   52629 ssh_runner.go:195] Run: crio --version
	I0528 21:31:17.227032   52629 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0528 21:31:17.300658   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetIP
	I0528 21:31:17.304330   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:31:17.304778   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:04:a3", ip: ""} in network mk-kubernetes-upgrade-314578: {Iface:virbr1 ExpiryTime:2024-05-28 22:31:01 +0000 UTC Type:0 Mac:52:54:00:f7:04:a3 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:kubernetes-upgrade-314578 Clientid:01:52:54:00:f7:04:a3}
	I0528 21:31:17.304805   52629 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined IP address 192.168.39.174 and MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:31:17.305051   52629 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0528 21:31:17.310046   52629 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 21:31:17.328473   52629 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-314578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-314578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0528 21:31:17.328574   52629 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0528 21:31:17.328633   52629 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 21:31:17.370450   52629 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0528 21:31:17.370532   52629 ssh_runner.go:195] Run: which lz4
	I0528 21:31:17.375076   52629 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0528 21:31:17.379743   52629 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0528 21:31:17.379800   52629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0528 21:31:19.202950   52629 crio.go:462] duration metric: took 1.827909345s to copy over tarball
	I0528 21:31:19.203037   52629 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0528 21:31:21.977430   52629 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.774357517s)
	I0528 21:31:21.977458   52629 crio.go:469] duration metric: took 2.774476103s to extract the tarball
	I0528 21:31:21.977467   52629 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0528 21:31:22.023423   52629 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 21:31:22.070631   52629 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0528 21:31:22.070658   52629 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0528 21:31:22.070742   52629 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0528 21:31:22.070779   52629 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0528 21:31:22.070797   52629 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0528 21:31:22.070803   52629 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0528 21:31:22.070754   52629 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0528 21:31:22.070803   52629 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0528 21:31:22.070854   52629 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0528 21:31:22.070775   52629 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0528 21:31:22.072408   52629 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0528 21:31:22.072430   52629 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0528 21:31:22.072408   52629 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0528 21:31:22.072408   52629 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0528 21:31:22.072461   52629 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0528 21:31:22.072463   52629 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0528 21:31:22.072576   52629 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0528 21:31:22.072734   52629 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0528 21:31:22.216234   52629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0528 21:31:22.216637   52629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0528 21:31:22.231468   52629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0528 21:31:22.251463   52629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0528 21:31:22.256152   52629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0528 21:31:22.275970   52629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0528 21:31:22.291778   52629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0528 21:31:22.300308   52629 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0528 21:31:22.300408   52629 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0528 21:31:22.300467   52629 ssh_runner.go:195] Run: which crictl
	I0528 21:31:22.337409   52629 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0528 21:31:22.337456   52629 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0528 21:31:22.337492   52629 ssh_runner.go:195] Run: which crictl
	I0528 21:31:22.399021   52629 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0528 21:31:22.399068   52629 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0528 21:31:22.399066   52629 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0528 21:31:22.399099   52629 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0528 21:31:22.399115   52629 ssh_runner.go:195] Run: which crictl
	I0528 21:31:22.399136   52629 ssh_runner.go:195] Run: which crictl
	I0528 21:31:22.407060   52629 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0528 21:31:22.407102   52629 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0528 21:31:22.407150   52629 ssh_runner.go:195] Run: which crictl
	I0528 21:31:22.440900   52629 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0528 21:31:22.440951   52629 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0528 21:31:22.440956   52629 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0528 21:31:22.440991   52629 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0528 21:31:22.441003   52629 ssh_runner.go:195] Run: which crictl
	I0528 21:31:22.441005   52629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0528 21:31:22.441020   52629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0528 21:31:22.441028   52629 ssh_runner.go:195] Run: which crictl
	I0528 21:31:22.440961   52629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0528 21:31:22.441067   52629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0528 21:31:22.441108   52629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0528 21:31:22.445299   52629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0528 21:31:22.589445   52629 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0528 21:31:22.589510   52629 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0528 21:31:22.589524   52629 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0528 21:31:22.589578   52629 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0528 21:31:22.589636   52629 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0528 21:31:22.589638   52629 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0528 21:31:22.589674   52629 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0528 21:31:22.622689   52629 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0528 21:31:23.016854   52629 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0528 21:31:23.178372   52629 cache_images.go:92] duration metric: took 1.107693696s to LoadCachedImages
	W0528 21:31:23.178459   52629 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0528 21:31:23.178473   52629 kubeadm.go:928] updating node { 192.168.39.174 8443 v1.20.0 crio true true} ...
	I0528 21:31:23.178626   52629 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-314578 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-314578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
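	The kubelet flags rendered above are written as a systemd drop-in (the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf appears a few lines below). A minimal sketch for checking what systemd actually loads after the daemon-reload (illustrative only):

	  sudo systemctl cat kubelet      # unit file plus the 10-kubeadm.conf drop-in
	  systemctl is-active kubelet     # state after the 'systemctl start kubelet' below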
	I0528 21:31:23.178716   52629 ssh_runner.go:195] Run: crio config
	I0528 21:31:23.234157   52629 cni.go:84] Creating CNI manager for ""
	I0528 21:31:23.234186   52629 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 21:31:23.234200   52629 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0528 21:31:23.234224   52629 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.174 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-314578 NodeName:kubernetes-upgrade-314578 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0528 21:31:23.234434   52629 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.174
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-314578"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.174
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.174"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0528 21:31:23.234508   52629 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0528 21:31:23.246028   52629 binaries.go:44] Found k8s binaries, skipping transfer
	I0528 21:31:23.246140   52629 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0528 21:31:23.256473   52629 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0528 21:31:23.275748   52629 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0528 21:31:23.297044   52629 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
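	The rendered kubeadm config is staged as /var/tmp/minikube/kubeadm.yaml.new above; on a first start it ultimately drives kubeadm init. The exact invocation is outside this excerpt, so the following is only a hedged sketch with an assumed final path and the versioned binary directory listed above:

	  sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init \
	    --config /var/tmp/minikube/kubeadm.yaml    # path assumed; flags trimmed to the minimum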
	I0528 21:31:23.316411   52629 ssh_runner.go:195] Run: grep 192.168.39.174	control-plane.minikube.internal$ /etc/hosts
	I0528 21:31:23.320742   52629 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 21:31:23.335339   52629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 21:31:23.470027   52629 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 21:31:23.488860   52629 certs.go:68] Setting up /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kubernetes-upgrade-314578 for IP: 192.168.39.174
	I0528 21:31:23.488885   52629 certs.go:194] generating shared ca certs ...
	I0528 21:31:23.488905   52629 certs.go:226] acquiring lock for ca certs: {Name:mkf081a2e878fd451cfbdf01dc28a5f7a6a3abc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:31:23.489088   52629 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key
	I0528 21:31:23.489134   52629 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key
	I0528 21:31:23.489146   52629 certs.go:256] generating profile certs ...
	I0528 21:31:23.489203   52629 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kubernetes-upgrade-314578/client.key
	I0528 21:31:23.489217   52629 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kubernetes-upgrade-314578/client.crt with IP's: []
	I0528 21:31:23.677314   52629 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kubernetes-upgrade-314578/client.crt ...
	I0528 21:31:23.677346   52629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kubernetes-upgrade-314578/client.crt: {Name:mkc90af215725cc14f59abae22aa9236a6ef29e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:31:23.677544   52629 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kubernetes-upgrade-314578/client.key ...
	I0528 21:31:23.677569   52629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kubernetes-upgrade-314578/client.key: {Name:mk607b751231708844f51ce0b905e028af2d9f57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:31:23.677690   52629 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kubernetes-upgrade-314578/apiserver.key.5d9c2211
	I0528 21:31:23.677714   52629 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kubernetes-upgrade-314578/apiserver.crt.5d9c2211 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.174]
	I0528 21:31:23.912410   52629 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kubernetes-upgrade-314578/apiserver.crt.5d9c2211 ...
	I0528 21:31:23.912441   52629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kubernetes-upgrade-314578/apiserver.crt.5d9c2211: {Name:mkced81f4fe9655effb7f723a565ecf4db91ec12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:31:23.912632   52629 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kubernetes-upgrade-314578/apiserver.key.5d9c2211 ...
	I0528 21:31:23.912652   52629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kubernetes-upgrade-314578/apiserver.key.5d9c2211: {Name:mk77d556528bb631ab6d48747338c260a6f0daa1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:31:23.912759   52629 certs.go:381] copying /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kubernetes-upgrade-314578/apiserver.crt.5d9c2211 -> /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kubernetes-upgrade-314578/apiserver.crt
	I0528 21:31:23.912863   52629 certs.go:385] copying /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kubernetes-upgrade-314578/apiserver.key.5d9c2211 -> /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kubernetes-upgrade-314578/apiserver.key
	I0528 21:31:23.912943   52629 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kubernetes-upgrade-314578/proxy-client.key
	I0528 21:31:23.912964   52629 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kubernetes-upgrade-314578/proxy-client.crt with IP's: []
	I0528 21:31:23.968738   52629 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kubernetes-upgrade-314578/proxy-client.crt ...
	I0528 21:31:23.968769   52629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kubernetes-upgrade-314578/proxy-client.crt: {Name:mk058f719fbde135620532cac4bea22cfea9c425 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:31:23.968947   52629 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kubernetes-upgrade-314578/proxy-client.key ...
	I0528 21:31:23.968966   52629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kubernetes-upgrade-314578/proxy-client.key: {Name:mk06710c93966e04c9585be981893e1ab903c73b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:31:23.969187   52629 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem (1338 bytes)
	W0528 21:31:23.969232   52629 certs.go:480] ignoring /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760_empty.pem, impossibly tiny 0 bytes
	I0528 21:31:23.969247   52629 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem (1675 bytes)
	I0528 21:31:23.969276   52629 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem (1078 bytes)
	I0528 21:31:23.969305   52629 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem (1123 bytes)
	I0528 21:31:23.969337   52629 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem (1675 bytes)
	I0528 21:31:23.969387   52629 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem (1708 bytes)
	I0528 21:31:23.970002   52629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0528 21:31:24.000854   52629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0528 21:31:24.028147   52629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0528 21:31:24.058260   52629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0528 21:31:24.087616   52629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kubernetes-upgrade-314578/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0528 21:31:24.116683   52629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kubernetes-upgrade-314578/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0528 21:31:24.144863   52629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kubernetes-upgrade-314578/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0528 21:31:24.173716   52629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kubernetes-upgrade-314578/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0528 21:31:24.202265   52629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem --> /usr/share/ca-certificates/11760.pem (1338 bytes)
	I0528 21:31:24.232875   52629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem --> /usr/share/ca-certificates/117602.pem (1708 bytes)
	I0528 21:31:24.264109   52629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0528 21:31:24.296540   52629 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0528 21:31:24.316537   52629 ssh_runner.go:195] Run: openssl version
	I0528 21:31:24.328913   52629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/117602.pem && ln -fs /usr/share/ca-certificates/117602.pem /etc/ssl/certs/117602.pem"
	I0528 21:31:24.343944   52629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117602.pem
	I0528 21:31:24.351323   52629 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 28 20:34 /usr/share/ca-certificates/117602.pem
	I0528 21:31:24.351399   52629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117602.pem
	I0528 21:31:24.360363   52629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/117602.pem /etc/ssl/certs/3ec20f2e.0"
	I0528 21:31:24.375362   52629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0528 21:31:24.396306   52629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0528 21:31:24.403489   52629 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 28 20:22 /usr/share/ca-certificates/minikubeCA.pem
	I0528 21:31:24.403563   52629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0528 21:31:24.416478   52629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0528 21:31:24.429540   52629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11760.pem && ln -fs /usr/share/ca-certificates/11760.pem /etc/ssl/certs/11760.pem"
	I0528 21:31:24.441497   52629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11760.pem
	I0528 21:31:24.446666   52629 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 28 20:34 /usr/share/ca-certificates/11760.pem
	I0528 21:31:24.446729   52629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11760.pem
	I0528 21:31:24.452996   52629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11760.pem /etc/ssl/certs/51391683.0"
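The openssl/ln sequence above is how minikube wires host certificates into the guest's system trust store: each PEM under /usr/share/ca-certificates is hashed with openssl x509 -hash and then symlinked into /etc/ssl/certs under that hash. A minimal sketch of the same steps, using a hypothetical example.pem rather than the exact files from this run:

    # Place the certificate where OpenSSL-based tools expect CA material (example.pem is illustrative).
    sudo cp example.pem /usr/share/ca-certificates/example.pem
    # Compute the subject hash OpenSSL uses to index trusted certificates.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
    # Create the <hash>.0 symlink that makes the certificate discoverable at verify time.
    sudo ln -fs /usr/share/ca-certificates/example.pem "/etc/ssl/certs/${HASH}.0"
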
	I0528 21:31:24.466014   52629 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0528 21:31:24.470634   52629 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0528 21:31:24.470707   52629 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-314578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-314578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:31:24.470788   52629 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0528 21:31:24.470869   52629 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0528 21:31:24.515760   52629 cri.go:89] found id: ""
	I0528 21:31:24.515842   52629 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0528 21:31:24.526559   52629 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0528 21:31:24.538007   52629 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0528 21:31:24.548488   52629 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0528 21:31:24.548509   52629 kubeadm.go:156] found existing configuration files:
	
	I0528 21:31:24.548549   52629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0528 21:31:24.560310   52629 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0528 21:31:24.560376   52629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0528 21:31:24.572977   52629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0528 21:31:24.582542   52629 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0528 21:31:24.582596   52629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0528 21:31:24.592216   52629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0528 21:31:24.601024   52629 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0528 21:31:24.601069   52629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0528 21:31:24.611023   52629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0528 21:31:24.620167   52629 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0528 21:31:24.620217   52629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
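The grep/rm pairs above are minikube's stale-config cleanup: any /etc/kubernetes/*.conf that does not already point at https://control-plane.minikube.internal:8443 is removed before kubeadm init runs (here every grep fails only because the files do not exist yet). A rough equivalent of that check, assuming the same four files:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # Keep the file only if it already targets the expected control-plane endpoint.
      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done
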
	I0528 21:31:24.630439   52629 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0528 21:31:24.765107   52629 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0528 21:31:24.765525   52629 kubeadm.go:309] [preflight] Running pre-flight checks
	I0528 21:31:24.927798   52629 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0528 21:31:24.927965   52629 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0528 21:31:24.928089   52629 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0528 21:31:25.130633   52629 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0528 21:31:25.133498   52629 out.go:204]   - Generating certificates and keys ...
	I0528 21:31:25.133614   52629 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0528 21:31:25.133699   52629 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0528 21:31:25.301118   52629 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0528 21:31:25.459823   52629 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0528 21:31:25.570869   52629 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0528 21:31:25.739466   52629 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0528 21:31:25.900421   52629 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0528 21:31:25.900644   52629 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-314578 localhost] and IPs [192.168.39.174 127.0.0.1 ::1]
	I0528 21:31:26.407255   52629 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0528 21:31:26.407473   52629 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-314578 localhost] and IPs [192.168.39.174 127.0.0.1 ::1]
	I0528 21:31:26.747341   52629 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0528 21:31:27.146631   52629 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0528 21:31:27.410245   52629 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0528 21:31:27.410482   52629 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0528 21:31:27.539810   52629 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0528 21:31:27.700974   52629 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0528 21:31:27.844847   52629 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0528 21:31:28.037419   52629 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0528 21:31:28.063550   52629 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0528 21:31:28.064665   52629 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0528 21:31:28.064836   52629 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0528 21:31:28.216201   52629 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0528 21:31:28.218378   52629 out.go:204]   - Booting up control plane ...
	I0528 21:31:28.218504   52629 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0528 21:31:28.227486   52629 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0528 21:31:28.228787   52629 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0528 21:31:28.230033   52629 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0528 21:31:28.238541   52629 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0528 21:32:08.234514   52629 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0528 21:32:08.235860   52629 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:32:08.236162   52629 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:32:13.236046   52629 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:32:13.236294   52629 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:32:23.235647   52629 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:32:23.235924   52629 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:32:43.235439   52629 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:32:43.235648   52629 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:33:23.237147   52629 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:33:23.237370   52629 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:33:23.237396   52629 kubeadm.go:309] 
	I0528 21:33:23.237445   52629 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0528 21:33:23.237549   52629 kubeadm.go:309] 		timed out waiting for the condition
	I0528 21:33:23.237569   52629 kubeadm.go:309] 
	I0528 21:33:23.237619   52629 kubeadm.go:309] 	This error is likely caused by:
	I0528 21:33:23.237662   52629 kubeadm.go:309] 		- The kubelet is not running
	I0528 21:33:23.237809   52629 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0528 21:33:23.237823   52629 kubeadm.go:309] 
	I0528 21:33:23.237957   52629 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0528 21:33:23.238010   52629 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0528 21:33:23.238056   52629 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0528 21:33:23.238067   52629 kubeadm.go:309] 
	I0528 21:33:23.238220   52629 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0528 21:33:23.238344   52629 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0528 21:33:23.238358   52629 kubeadm.go:309] 
	I0528 21:33:23.238496   52629 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0528 21:33:23.238617   52629 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0528 21:33:23.238720   52629 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0528 21:33:23.238823   52629 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0528 21:33:23.238832   52629 kubeadm.go:309] 
	I0528 21:33:23.239928   52629 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0528 21:33:23.240027   52629 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0528 21:33:23.240118   52629 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0528 21:33:23.240228   52629 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-314578 localhost] and IPs [192.168.39.174 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-314578 localhost] and IPs [192.168.39.174 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-314578 localhost] and IPs [192.168.39.174 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-314578 localhost] and IPs [192.168.39.174 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
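The crictl commands kubeadm recommends above are the quickest way to tell whether the control-plane containers were ever created under CRI-O. A sketch of that triage, with CONTAINERID standing in for whatever ID the listing returns:

    # List all Kubernetes containers, including exited ones, managed by CRI-O.
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # Inspect the logs of a failing container found above (CONTAINERID is a placeholder).
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

The post-mortem later in this log shows exactly this listing coming back empty for every control-plane component, so minikube falls back to a kubeadm reset and a second attempt.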
	
	I0528 21:33:23.240287   52629 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0528 21:33:25.053801   52629 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.813460757s)
	I0528 21:33:25.053885   52629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 21:33:25.067899   52629 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0528 21:33:25.077958   52629 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0528 21:33:25.077991   52629 kubeadm.go:156] found existing configuration files:
	
	I0528 21:33:25.078046   52629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0528 21:33:25.087694   52629 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0528 21:33:25.087755   52629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0528 21:33:25.097855   52629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0528 21:33:25.107149   52629 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0528 21:33:25.107215   52629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0528 21:33:25.116949   52629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0528 21:33:25.126047   52629 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0528 21:33:25.126106   52629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0528 21:33:25.135227   52629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0528 21:33:25.143976   52629 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0528 21:33:25.144038   52629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0528 21:33:25.153187   52629 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0528 21:33:25.381049   52629 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0528 21:35:21.355402   52629 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0528 21:35:21.355503   52629 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0528 21:35:21.357048   52629 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0528 21:35:21.357101   52629 kubeadm.go:309] [preflight] Running pre-flight checks
	I0528 21:35:21.357211   52629 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0528 21:35:21.357336   52629 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0528 21:35:21.357456   52629 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0528 21:35:21.357559   52629 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0528 21:35:21.359230   52629 out.go:204]   - Generating certificates and keys ...
	I0528 21:35:21.359318   52629 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0528 21:35:21.359493   52629 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0528 21:35:21.359564   52629 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0528 21:35:21.359612   52629 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0528 21:35:21.359669   52629 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0528 21:35:21.359738   52629 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0528 21:35:21.359830   52629 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0528 21:35:21.359916   52629 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0528 21:35:21.360031   52629 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0528 21:35:21.360122   52629 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0528 21:35:21.360191   52629 kubeadm.go:309] [certs] Using the existing "sa" key
	I0528 21:35:21.360260   52629 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0528 21:35:21.360338   52629 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0528 21:35:21.360418   52629 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0528 21:35:21.360512   52629 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0528 21:35:21.360592   52629 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0528 21:35:21.360721   52629 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0528 21:35:21.360802   52629 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0528 21:35:21.360859   52629 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0528 21:35:21.360949   52629 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0528 21:35:21.363221   52629 out.go:204]   - Booting up control plane ...
	I0528 21:35:21.363319   52629 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0528 21:35:21.363385   52629 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0528 21:35:21.363442   52629 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0528 21:35:21.363537   52629 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0528 21:35:21.363759   52629 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0528 21:35:21.363823   52629 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0528 21:35:21.363901   52629 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:35:21.364075   52629 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:35:21.364162   52629 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:35:21.364340   52629 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:35:21.364427   52629 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:35:21.364635   52629 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:35:21.364706   52629 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:35:21.364876   52629 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:35:21.364953   52629 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:35:21.365145   52629 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:35:21.365160   52629 kubeadm.go:309] 
	I0528 21:35:21.365215   52629 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0528 21:35:21.365278   52629 kubeadm.go:309] 		timed out waiting for the condition
	I0528 21:35:21.365287   52629 kubeadm.go:309] 
	I0528 21:35:21.365335   52629 kubeadm.go:309] 	This error is likely caused by:
	I0528 21:35:21.365393   52629 kubeadm.go:309] 		- The kubelet is not running
	I0528 21:35:21.365503   52629 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0528 21:35:21.365522   52629 kubeadm.go:309] 
	I0528 21:35:21.365639   52629 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0528 21:35:21.365669   52629 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0528 21:35:21.365700   52629 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0528 21:35:21.365706   52629 kubeadm.go:309] 
	I0528 21:35:21.365815   52629 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0528 21:35:21.365905   52629 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0528 21:35:21.365925   52629 kubeadm.go:309] 
	I0528 21:35:21.366070   52629 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0528 21:35:21.366214   52629 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0528 21:35:21.366323   52629 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0528 21:35:21.366382   52629 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0528 21:35:21.366416   52629 kubeadm.go:309] 
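Each [kubelet-check] line above is kubeadm polling the kubelet's local health endpoint; connection refused on port 10248 means nothing is listening there at all, not that the kubelet is merely unhealthy. The probe and the obvious follow-up checks boil down to the following (a sketch, not commands captured from this run):

    # kubeadm's health check is essentially this request against the kubelet's healthz server.
    curl -sSL http://localhost:10248/healthz
    # "connection refused" points at the service itself, so check its status and recent journal.
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet | tail -n 50
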
	I0528 21:35:21.366448   52629 kubeadm.go:393] duration metric: took 3m56.895747591s to StartCluster
	I0528 21:35:21.366490   52629 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:35:21.366542   52629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:35:21.411768   52629 cri.go:89] found id: ""
	I0528 21:35:21.411794   52629 logs.go:276] 0 containers: []
	W0528 21:35:21.411804   52629 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:35:21.411816   52629 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:35:21.411877   52629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:35:21.447996   52629 cri.go:89] found id: ""
	I0528 21:35:21.448027   52629 logs.go:276] 0 containers: []
	W0528 21:35:21.448038   52629 logs.go:278] No container was found matching "etcd"
	I0528 21:35:21.448045   52629 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:35:21.448117   52629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:35:21.482061   52629 cri.go:89] found id: ""
	I0528 21:35:21.482091   52629 logs.go:276] 0 containers: []
	W0528 21:35:21.482101   52629 logs.go:278] No container was found matching "coredns"
	I0528 21:35:21.482109   52629 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:35:21.482171   52629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:35:21.515148   52629 cri.go:89] found id: ""
	I0528 21:35:21.515171   52629 logs.go:276] 0 containers: []
	W0528 21:35:21.515182   52629 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:35:21.515189   52629 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:35:21.515246   52629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:35:21.549203   52629 cri.go:89] found id: ""
	I0528 21:35:21.549232   52629 logs.go:276] 0 containers: []
	W0528 21:35:21.549242   52629 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:35:21.549249   52629 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:35:21.549309   52629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:35:21.581382   52629 cri.go:89] found id: ""
	I0528 21:35:21.581414   52629 logs.go:276] 0 containers: []
	W0528 21:35:21.581423   52629 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:35:21.581430   52629 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:35:21.581491   52629 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:35:21.614661   52629 cri.go:89] found id: ""
	I0528 21:35:21.614682   52629 logs.go:276] 0 containers: []
	W0528 21:35:21.614688   52629 logs.go:278] No container was found matching "kindnet"
	I0528 21:35:21.614698   52629 logs.go:123] Gathering logs for container status ...
	I0528 21:35:21.614708   52629 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:35:21.655259   52629 logs.go:123] Gathering logs for kubelet ...
	I0528 21:35:21.655291   52629 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:35:21.712148   52629 logs.go:123] Gathering logs for dmesg ...
	I0528 21:35:21.712184   52629 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:35:21.726422   52629 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:35:21.726461   52629 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:35:21.857920   52629 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:35:21.857941   52629 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:35:21.857954   52629 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
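The post-mortem above is minikube collecting its standard diagnostics: container status, the kubelet and CRI-O journals, kernel warnings, and a kubectl describe nodes that fails because no API server ever came up on localhost:8443. Roughly the same data can be gathered by hand; the paths and kubectl binary location below follow this run's log and would differ on other setups:

    sudo crictl ps -a                                  # container status (empty in this run)
    sudo journalctl -u kubelet -n 400                  # kubelet journal
    sudo journalctl -u crio -n 400                     # CRI-O journal
    sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400   # kernel warnings and errors
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
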
	W0528 21:35:21.951237   52629 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0528 21:35:21.951281   52629 out.go:239] * 
	* 
	W0528 21:35:21.951349   52629 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0528 21:35:21.951372   52629 out.go:239] * 
	* 
	W0528 21:35:21.952223   52629 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0528 21:35:21.954728   52629 out.go:177] 
	W0528 21:35:21.955961   52629 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0528 21:35:21.956008   52629 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0528 21:35:21.956035   52629 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0528 21:35:21.957335   52629 out.go:177] 

                                                
                                                
** /stderr **
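The failure above is consistent with the kubelet/CRI-O cgroup-driver mismatch that the suggestion near the end of the log points at. A minimal troubleshooting sketch, assuming `minikube ssh` works against this profile and that the grep targets (`cgroupDriver` in /var/lib/kubelet/config.yaml, `cgroup_manager` in the CRI-O config) match stock kubelet/CRI-O layouts; the `--extra-config=kubelet.cgroup-driver=systemd` retry is the one the log itself recommends:

    # check whether the kubelet ever came up, and why it died
    out/minikube-linux-amd64 -p kubernetes-upgrade-314578 ssh "sudo systemctl status kubelet --no-pager"
    out/minikube-linux-amd64 -p kubernetes-upgrade-314578 ssh "sudo journalctl -xeu kubelet --no-pager | tail -n 50"
    # compare the cgroup drivers: kubelet's config.yaml vs. CRI-O's cgroup_manager
    out/minikube-linux-amd64 -p kubernetes-upgrade-314578 ssh "sudo grep cgroupDriver /var/lib/kubelet/config.yaml"
    out/minikube-linux-amd64 -p kubernetes-upgrade-314578 ssh "sudo crio config 2>/dev/null | grep cgroup_manager"
    # if they disagree, retry the start with the driver pinned, as the log output suggests
    out/minikube-linux-amd64 start -p kubernetes-upgrade-314578 --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd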
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-314578 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-314578
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-314578: (1.356444855s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-314578 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-314578 status --format={{.Host}}: exit status 7 (63.713572ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
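The non-zero exit is tolerated here because the profile was just stopped. A hedged shell sketch of the same check, relying only on the `status --format={{.Host}}` output shown above:

    # treat a failing `status` call as acceptable as long as the host reports Stopped
    host_state="$(out/minikube-linux-amd64 -p kubernetes-upgrade-314578 status --format='{{.Host}}' || true)"
    if [ "$host_state" = "Stopped" ]; then
        echo "profile is stopped; safe to proceed with the upgrade start"
    fi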
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-314578 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-314578 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m12.221698958s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-314578 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-314578 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-314578 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (90.91224ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-314578] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18966-3963/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-3963/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-314578
	    minikube start -p kubernetes-upgrade-314578 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3145782 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.1, by running:
	    
	    minikube start -p kubernetes-upgrade-314578 --kubernetes-version=v1.30.1
	    

                                                
                                                
** /stderr **
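The rejection above is the expected guard: minikube will not move an existing profile from v1.30.1 back to v1.20.0 in place. A hedged pre-flight sketch that detects this before calling start, assuming `jq` is available (the `kubectl version --output=json` call mirrors the one run earlier in the test):

    requested="v1.20.0"
    current="$(kubectl --context kubernetes-upgrade-314578 version --output=json | jq -r '.serverVersion.gitVersion')"
    # sort -V puts the lower semver first; if the requested version is strictly older, recreate rather than start in place
    if [ "$(printf '%s\n%s\n' "$requested" "$current" | sort -V | head -n1)" = "$requested" ] && [ "$requested" != "$current" ]; then
        echo "downgrade from $current to $requested is unsupported; delete and recreate the profile instead"
    fi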
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-314578 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-314578 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (15.436381157s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-05-28 21:36:51.248422741 +0000 UTC m=+4542.004501353
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-314578 -n kubernetes-upgrade-314578
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-314578 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-314578 logs -n 25: (1.256627602s)
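When the 25-line tail is not enough context, the suggestion box in the log recommends persisting the full output instead; a short sketch of both variants:

    # write the complete post-mortem log to a file (as the log's own suggestion box recommends)
    out/minikube-linux-amd64 -p kubernetes-upgrade-314578 logs --file=logs.txt
    # or keep a longer tail on the console for a quick scan
    out/minikube-linux-amd64 -p kubernetes-upgrade-314578 logs -n 200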
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-110727 sudo systemctl                        | auto-110727               | jenkins | v1.33.1 | 28 May 24 21:35 UTC |                     |
	|         | status docker --all --full                           |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-110727 sudo systemctl                        | auto-110727               | jenkins | v1.33.1 | 28 May 24 21:35 UTC | 28 May 24 21:35 UTC |
	|         | cat docker --no-pager                                |                           |         |         |                     |                     |
	| ssh     | -p auto-110727 sudo cat                              | auto-110727               | jenkins | v1.33.1 | 28 May 24 21:35 UTC | 28 May 24 21:35 UTC |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p auto-110727 sudo docker                           | auto-110727               | jenkins | v1.33.1 | 28 May 24 21:35 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p auto-110727 sudo systemctl                        | auto-110727               | jenkins | v1.33.1 | 28 May 24 21:35 UTC |                     |
	|         | status cri-docker --all --full                       |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-110727 sudo systemctl                        | auto-110727               | jenkins | v1.33.1 | 28 May 24 21:35 UTC | 28 May 24 21:35 UTC |
	|         | cat cri-docker --no-pager                            |                           |         |         |                     |                     |
	| ssh     | -p auto-110727 sudo cat                              | auto-110727               | jenkins | v1.33.1 | 28 May 24 21:35 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p auto-110727 sudo cat                              | auto-110727               | jenkins | v1.33.1 | 28 May 24 21:35 UTC | 28 May 24 21:35 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p auto-110727 sudo                                  | auto-110727               | jenkins | v1.33.1 | 28 May 24 21:35 UTC | 28 May 24 21:35 UTC |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p auto-110727 sudo systemctl                        | auto-110727               | jenkins | v1.33.1 | 28 May 24 21:35 UTC |                     |
	|         | status containerd --all --full                       |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-110727 sudo systemctl                        | auto-110727               | jenkins | v1.33.1 | 28 May 24 21:35 UTC | 28 May 24 21:35 UTC |
	|         | cat containerd --no-pager                            |                           |         |         |                     |                     |
	| ssh     | -p auto-110727 sudo cat                              | auto-110727               | jenkins | v1.33.1 | 28 May 24 21:35 UTC | 28 May 24 21:35 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p auto-110727 sudo cat                              | auto-110727               | jenkins | v1.33.1 | 28 May 24 21:35 UTC | 28 May 24 21:35 UTC |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p auto-110727 sudo containerd                       | auto-110727               | jenkins | v1.33.1 | 28 May 24 21:35 UTC | 28 May 24 21:35 UTC |
	|         | config dump                                          |                           |         |         |                     |                     |
	| ssh     | -p auto-110727 sudo systemctl                        | auto-110727               | jenkins | v1.33.1 | 28 May 24 21:35 UTC | 28 May 24 21:35 UTC |
	|         | status crio --all --full                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-110727 sudo systemctl                        | auto-110727               | jenkins | v1.33.1 | 28 May 24 21:35 UTC | 28 May 24 21:35 UTC |
	|         | cat crio --no-pager                                  |                           |         |         |                     |                     |
	| ssh     | -p auto-110727 sudo find                             | auto-110727               | jenkins | v1.33.1 | 28 May 24 21:35 UTC | 28 May 24 21:35 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p auto-110727 sudo crio                             | auto-110727               | jenkins | v1.33.1 | 28 May 24 21:35 UTC | 28 May 24 21:35 UTC |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p auto-110727                                       | auto-110727               | jenkins | v1.33.1 | 28 May 24 21:35 UTC | 28 May 24 21:35 UTC |
	| start   | -p custom-flannel-110727                             | custom-flannel-110727     | jenkins | v1.33.1 | 28 May 24 21:35 UTC |                     |
	|         | --memory=3072 --alsologtostderr                      |                           |         |         |                     |                     |
	|         | --wait=true --wait-timeout=15m                       |                           |         |         |                     |                     |
	|         | --cni=testdata/kube-flannel.yaml                     |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-314578                         | kubernetes-upgrade-314578 | jenkins | v1.33.1 | 28 May 24 21:35 UTC | 28 May 24 21:35 UTC |
	| start   | -p kubernetes-upgrade-314578                         | kubernetes-upgrade-314578 | jenkins | v1.33.1 | 28 May 24 21:35 UTC | 28 May 24 21:36 UTC |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-314578                         | kubernetes-upgrade-314578 | jenkins | v1.33.1 | 28 May 24 21:36 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-314578                         | kubernetes-upgrade-314578 | jenkins | v1.33.1 | 28 May 24 21:36 UTC | 28 May 24 21:36 UTC |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | -p calico-110727 pgrep -a                            | calico-110727             | jenkins | v1.33.1 | 28 May 24 21:36 UTC | 28 May 24 21:36 UTC |
	|         | kubelet                                              |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/28 21:36:35
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0528 21:36:35.860599   59176 out.go:291] Setting OutFile to fd 1 ...
	I0528 21:36:35.860768   59176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:36:35.860785   59176 out.go:304] Setting ErrFile to fd 2...
	I0528 21:36:35.860791   59176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:36:35.861097   59176 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
	I0528 21:36:35.861754   59176 out.go:298] Setting JSON to false
	I0528 21:36:35.863044   59176 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4739,"bootTime":1716927457,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0528 21:36:35.863120   59176 start.go:139] virtualization: kvm guest
	I0528 21:36:35.865159   59176 out.go:177] * [kubernetes-upgrade-314578] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0528 21:36:35.866815   59176 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 21:36:35.868182   59176 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 21:36:35.866541   59176 notify.go:220] Checking for updates...
	I0528 21:36:35.869588   59176 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 21:36:35.871015   59176 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 21:36:35.872327   59176 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0528 21:36:35.873524   59176 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 21:36:35.875229   59176 config.go:182] Loaded profile config "kubernetes-upgrade-314578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 21:36:35.875633   59176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:36:35.875676   59176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:36:35.891747   59176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44145
	I0528 21:36:35.892192   59176 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:36:35.892721   59176 main.go:141] libmachine: Using API Version  1
	I0528 21:36:35.892739   59176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:36:35.893164   59176 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:36:35.893360   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .DriverName
	I0528 21:36:35.893642   59176 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 21:36:35.894069   59176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:36:35.894114   59176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:36:35.909708   59176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35847
	I0528 21:36:35.910092   59176 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:36:35.910568   59176 main.go:141] libmachine: Using API Version  1
	I0528 21:36:35.910593   59176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:36:35.910890   59176 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:36:35.911097   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .DriverName
	I0528 21:36:35.952257   59176 out.go:177] * Using the kvm2 driver based on existing profile
	I0528 21:36:35.953882   59176 start.go:297] selected driver: kvm2
	I0528 21:36:35.953902   59176 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-314578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kubernetes-upgrade-314578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:36:35.954052   59176 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0528 21:36:35.954787   59176 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 21:36:35.954845   59176 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18966-3963/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0528 21:36:35.970039   59176 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0528 21:36:35.970397   59176 cni.go:84] Creating CNI manager for ""
	I0528 21:36:35.970410   59176 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 21:36:35.970446   59176 start.go:340] cluster config:
	{Name:kubernetes-upgrade-314578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kubernetes-upgrade-314578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:36:35.970596   59176 iso.go:125] acquiring lock: {Name:mk0ef48bd19f4019c3ee87391d15b9afc1d5e8d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 21:36:35.972190   59176 out.go:177] * Starting "kubernetes-upgrade-314578" primary control-plane node in "kubernetes-upgrade-314578" cluster
	I0528 21:36:35.973790   59176 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 21:36:35.973833   59176 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0528 21:36:35.973853   59176 cache.go:56] Caching tarball of preloaded images
	I0528 21:36:35.973931   59176 preload.go:173] Found /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0528 21:36:35.973947   59176 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0528 21:36:35.974055   59176 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kubernetes-upgrade-314578/config.json ...
	I0528 21:36:35.974268   59176 start.go:360] acquireMachinesLock for kubernetes-upgrade-314578: {Name:mk6fb3103b370c7f3d9923fd9de7afe2c77239d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0528 21:36:35.974321   59176 start.go:364] duration metric: took 32.257µs to acquireMachinesLock for "kubernetes-upgrade-314578"
	I0528 21:36:35.974339   59176 start.go:96] Skipping create...Using existing machine configuration
	I0528 21:36:35.974355   59176 fix.go:54] fixHost starting: 
	I0528 21:36:35.974639   59176 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:36:35.974678   59176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:36:35.993236   59176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42397
	I0528 21:36:35.993700   59176 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:36:35.994263   59176 main.go:141] libmachine: Using API Version  1
	I0528 21:36:35.994288   59176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:36:35.994631   59176 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:36:35.994844   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .DriverName
	I0528 21:36:35.995036   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetState
	I0528 21:36:35.996772   59176 fix.go:112] recreateIfNeeded on kubernetes-upgrade-314578: state=Running err=<nil>
	W0528 21:36:35.996795   59176 fix.go:138] unexpected machine state, will restart: <nil>
	I0528 21:36:35.998155   59176 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-314578" VM ...
	I0528 21:36:34.845101   57051 pod_ready.go:92] pod "kube-proxy-spqhp" in "kube-system" namespace has status "Ready":"True"
	I0528 21:36:34.845132   57051 pod_ready.go:81] duration metric: took 401.52624ms for pod "kube-proxy-spqhp" in "kube-system" namespace to be "Ready" ...
	I0528 21:36:34.845145   57051 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-calico-110727" in "kube-system" namespace to be "Ready" ...
	I0528 21:36:35.243859   57051 pod_ready.go:92] pod "kube-scheduler-calico-110727" in "kube-system" namespace has status "Ready":"True"
	I0528 21:36:35.243887   57051 pod_ready.go:81] duration metric: took 398.730408ms for pod "kube-scheduler-calico-110727" in "kube-system" namespace to be "Ready" ...
	I0528 21:36:35.243900   57051 pod_ready.go:38] duration metric: took 23.1144695s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 21:36:35.243919   57051 api_server.go:52] waiting for apiserver process to appear ...
	I0528 21:36:35.243978   57051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:36:35.271648   57051 api_server.go:72] duration metric: took 35.991632983s to wait for apiserver process to appear ...
	I0528 21:36:35.271683   57051 api_server.go:88] waiting for apiserver healthz status ...
	I0528 21:36:35.271705   57051 api_server.go:253] Checking apiserver healthz at https://192.168.61.34:8443/healthz ...
	I0528 21:36:35.279125   57051 api_server.go:279] https://192.168.61.34:8443/healthz returned 200:
	ok
	I0528 21:36:35.280331   57051 api_server.go:141] control plane version: v1.30.1
	I0528 21:36:35.280364   57051 api_server.go:131] duration metric: took 8.67235ms to wait for apiserver health ...
	I0528 21:36:35.280375   57051 system_pods.go:43] waiting for kube-system pods to appear ...
	I0528 21:36:35.448941   57051 system_pods.go:59] 10 kube-system pods found
	I0528 21:36:35.448975   57051 system_pods.go:61] "calico-kube-controllers-564985c589-fzmtc" [4571d39a-6032-4d74-8d57-baae6583396b] Running
	I0528 21:36:35.448981   57051 system_pods.go:61] "calico-node-xcx2c" [8f5ec468-e3ce-491e-ba79-4516ad0abd94] Running
	I0528 21:36:35.448985   57051 system_pods.go:61] "coredns-7db6d8ff4d-7pxj7" [f724c98b-9dc4-4a69-b6ad-20414ee10d12] Running
	I0528 21:36:35.448988   57051 system_pods.go:61] "coredns-7db6d8ff4d-c4j9k" [47c0ba42-548d-4b67-988d-47c262e3711a] Running
	I0528 21:36:35.448991   57051 system_pods.go:61] "etcd-calico-110727" [ea8b7311-80a3-4b1f-a756-252dd97d5f40] Running
	I0528 21:36:35.448994   57051 system_pods.go:61] "kube-apiserver-calico-110727" [9028b841-d34f-4744-85e7-bc9b8e1627fe] Running
	I0528 21:36:35.448997   57051 system_pods.go:61] "kube-controller-manager-calico-110727" [8a6eb4d9-d70e-411a-8c20-8cc2ebdfd8b9] Running
	I0528 21:36:35.449001   57051 system_pods.go:61] "kube-proxy-spqhp" [386a1059-c7dc-4201-b7c9-e4917d119da5] Running
	I0528 21:36:35.449005   57051 system_pods.go:61] "kube-scheduler-calico-110727" [a06d415a-2224-488f-9ede-daed682f0ffe] Running
	I0528 21:36:35.449009   57051 system_pods.go:61] "storage-provisioner" [fb0655d4-f64e-4c5b-88bb-5563727fbf37] Running
	I0528 21:36:35.449017   57051 system_pods.go:74] duration metric: took 168.635712ms to wait for pod list to return data ...
	I0528 21:36:35.449026   57051 default_sa.go:34] waiting for default service account to be created ...
	I0528 21:36:35.643065   57051 default_sa.go:45] found service account: "default"
	I0528 21:36:35.643092   57051 default_sa.go:55] duration metric: took 194.058025ms for default service account to be created ...
	I0528 21:36:35.643101   57051 system_pods.go:116] waiting for k8s-apps to be running ...
	I0528 21:36:35.850491   57051 system_pods.go:86] 10 kube-system pods found
	I0528 21:36:35.850520   57051 system_pods.go:89] "calico-kube-controllers-564985c589-fzmtc" [4571d39a-6032-4d74-8d57-baae6583396b] Running
	I0528 21:36:35.850528   57051 system_pods.go:89] "calico-node-xcx2c" [8f5ec468-e3ce-491e-ba79-4516ad0abd94] Running
	I0528 21:36:35.850535   57051 system_pods.go:89] "coredns-7db6d8ff4d-7pxj7" [f724c98b-9dc4-4a69-b6ad-20414ee10d12] Running
	I0528 21:36:35.850541   57051 system_pods.go:89] "coredns-7db6d8ff4d-c4j9k" [47c0ba42-548d-4b67-988d-47c262e3711a] Running
	I0528 21:36:35.850549   57051 system_pods.go:89] "etcd-calico-110727" [ea8b7311-80a3-4b1f-a756-252dd97d5f40] Running
	I0528 21:36:35.850555   57051 system_pods.go:89] "kube-apiserver-calico-110727" [9028b841-d34f-4744-85e7-bc9b8e1627fe] Running
	I0528 21:36:35.850561   57051 system_pods.go:89] "kube-controller-manager-calico-110727" [8a6eb4d9-d70e-411a-8c20-8cc2ebdfd8b9] Running
	I0528 21:36:35.850568   57051 system_pods.go:89] "kube-proxy-spqhp" [386a1059-c7dc-4201-b7c9-e4917d119da5] Running
	I0528 21:36:35.850582   57051 system_pods.go:89] "kube-scheduler-calico-110727" [a06d415a-2224-488f-9ede-daed682f0ffe] Running
	I0528 21:36:35.850589   57051 system_pods.go:89] "storage-provisioner" [fb0655d4-f64e-4c5b-88bb-5563727fbf37] Running
	I0528 21:36:35.850597   57051 system_pods.go:126] duration metric: took 207.48853ms to wait for k8s-apps to be running ...
	I0528 21:36:35.850606   57051 system_svc.go:44] waiting for kubelet service to be running ....
	I0528 21:36:35.850655   57051 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 21:36:35.869533   57051 system_svc.go:56] duration metric: took 18.923417ms WaitForService to wait for kubelet
	I0528 21:36:35.869554   57051 kubeadm.go:576] duration metric: took 36.589542737s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 21:36:35.869576   57051 node_conditions.go:102] verifying NodePressure condition ...
	I0528 21:36:36.043242   57051 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 21:36:36.043272   57051 node_conditions.go:123] node cpu capacity is 2
	I0528 21:36:36.043287   57051 node_conditions.go:105] duration metric: took 173.702133ms to run NodePressure ...
	I0528 21:36:36.043301   57051 start.go:240] waiting for startup goroutines ...
	I0528 21:36:36.043314   57051 start.go:245] waiting for cluster config update ...
	I0528 21:36:36.043332   57051 start.go:254] writing updated cluster config ...
	I0528 21:36:36.043609   57051 ssh_runner.go:195] Run: rm -f paused
	I0528 21:36:36.098513   57051 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0528 21:36:36.100466   57051 out.go:177] * Done! kubectl is now configured to use "calico-110727" cluster and "default" namespace by default
	I0528 21:36:34.105287   58273 node_ready.go:53] node "custom-flannel-110727" has status "Ready":"False"
	I0528 21:36:36.106153   58273 node_ready.go:53] node "custom-flannel-110727" has status "Ready":"False"
	I0528 21:36:37.608888   58273 node_ready.go:49] node "custom-flannel-110727" has status "Ready":"True"
	I0528 21:36:37.608930   58273 node_ready.go:38] duration metric: took 8.007001493s for node "custom-flannel-110727" to be "Ready" ...
	I0528 21:36:37.608942   58273 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 21:36:34.247416   53940 cri.go:89] found id: "f5c23d93df45849840c288924a91213e4118a16dca362e5de8c27a3f5dc2ea90"
	I0528 21:36:34.247428   53940 cri.go:89] found id: ""
	I0528 21:36:34.247437   53940 logs.go:276] 1 containers: [f5c23d93df45849840c288924a91213e4118a16dca362e5de8c27a3f5dc2ea90]
	I0528 21:36:34.247486   53940 ssh_runner.go:195] Run: which crictl
	I0528 21:36:34.251913   53940 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:36:34.251969   53940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:36:34.290178   53940 cri.go:89] found id: "8d8a8c5eaac45dad52d6abbf32bc6bbe51b4145ba424b99c4329b21652154fd8"
	I0528 21:36:34.290187   53940 cri.go:89] found id: ""
	I0528 21:36:34.290194   53940 logs.go:276] 1 containers: [8d8a8c5eaac45dad52d6abbf32bc6bbe51b4145ba424b99c4329b21652154fd8]
	I0528 21:36:34.290236   53940 ssh_runner.go:195] Run: which crictl
	I0528 21:36:34.294549   53940 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:36:34.294590   53940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:36:34.332756   53940 cri.go:89] found id: "0bf5138f27288ab0cbd4066c2b1a2e22117358f14e41a55a44f093965eeedd2b"
	I0528 21:36:34.332769   53940 cri.go:89] found id: "07d7437db436b52ffc1890444d4c9171c35d237362b6896b4ec3807afd80136c"
	I0528 21:36:34.332778   53940 cri.go:89] found id: ""
	I0528 21:36:34.332785   53940 logs.go:276] 2 containers: [0bf5138f27288ab0cbd4066c2b1a2e22117358f14e41a55a44f093965eeedd2b 07d7437db436b52ffc1890444d4c9171c35d237362b6896b4ec3807afd80136c]
	I0528 21:36:34.332832   53940 ssh_runner.go:195] Run: which crictl
	I0528 21:36:34.337430   53940 ssh_runner.go:195] Run: which crictl
	I0528 21:36:34.341306   53940 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:36:34.341361   53940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:36:34.380371   53940 cri.go:89] found id: "6855d61e8d8513cb64be545367a1175ca21d3ca5e1d652fb3326af59a7161916"
	I0528 21:36:34.380383   53940 cri.go:89] found id: ""
	I0528 21:36:34.380390   53940 logs.go:276] 1 containers: [6855d61e8d8513cb64be545367a1175ca21d3ca5e1d652fb3326af59a7161916]
	I0528 21:36:34.380442   53940 ssh_runner.go:195] Run: which crictl
	I0528 21:36:34.384830   53940 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:36:34.384894   53940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:36:34.422299   53940 cri.go:89] found id: "cd48d9a682f35098694aa404e88bada604d846500b0d8522b2ffdaac02372052"
	I0528 21:36:34.422315   53940 cri.go:89] found id: ""
	I0528 21:36:34.422322   53940 logs.go:276] 1 containers: [cd48d9a682f35098694aa404e88bada604d846500b0d8522b2ffdaac02372052]
	I0528 21:36:34.422383   53940 ssh_runner.go:195] Run: which crictl
	I0528 21:36:34.427027   53940 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:36:34.427088   53940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:36:34.468678   53940 cri.go:89] found id: ""
	I0528 21:36:34.468693   53940 logs.go:276] 0 containers: []
	W0528 21:36:34.468701   53940 logs.go:278] No container was found matching "kindnet"
	I0528 21:36:34.468707   53940 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0528 21:36:34.468764   53940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0528 21:36:34.514897   53940 cri.go:89] found id: "92d9169ab6ccab7ee6b0266d51f9372a739795eecb87253521e7fb9aa8ef7788"
	I0528 21:36:34.514911   53940 cri.go:89] found id: ""
	I0528 21:36:34.514917   53940 logs.go:276] 1 containers: [92d9169ab6ccab7ee6b0266d51f9372a739795eecb87253521e7fb9aa8ef7788]
	I0528 21:36:34.514966   53940 ssh_runner.go:195] Run: which crictl
	I0528 21:36:34.519963   53940 logs.go:123] Gathering logs for kube-controller-manager [cd48d9a682f35098694aa404e88bada604d846500b0d8522b2ffdaac02372052] ...
	I0528 21:36:34.519979   53940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd48d9a682f35098694aa404e88bada604d846500b0d8522b2ffdaac02372052"
	I0528 21:36:34.560714   53940 logs.go:123] Gathering logs for storage-provisioner [92d9169ab6ccab7ee6b0266d51f9372a739795eecb87253521e7fb9aa8ef7788] ...
	I0528 21:36:34.560735   53940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92d9169ab6ccab7ee6b0266d51f9372a739795eecb87253521e7fb9aa8ef7788"
	I0528 21:36:34.603522   53940 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:36:34.603540   53940 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:36:34.944610   53940 logs.go:123] Gathering logs for kubelet ...
	I0528 21:36:34.944625   53940 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:36:35.060729   53940 logs.go:123] Gathering logs for dmesg ...
	I0528 21:36:35.060754   53940 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:36:35.079324   53940 logs.go:123] Gathering logs for kube-apiserver [8be208bebdfe03545df7824da44f7afc62d5b7ab3cf1b0017e6ddeb8da35bd1d] ...
	I0528 21:36:35.079346   53940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8be208bebdfe03545df7824da44f7afc62d5b7ab3cf1b0017e6ddeb8da35bd1d"
	I0528 21:36:35.123816   53940 logs.go:123] Gathering logs for etcd [f5c23d93df45849840c288924a91213e4118a16dca362e5de8c27a3f5dc2ea90] ...
	I0528 21:36:35.123834   53940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5c23d93df45849840c288924a91213e4118a16dca362e5de8c27a3f5dc2ea90"
	I0528 21:36:35.169454   53940 logs.go:123] Gathering logs for kube-proxy [6855d61e8d8513cb64be545367a1175ca21d3ca5e1d652fb3326af59a7161916] ...
	I0528 21:36:35.169470   53940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6855d61e8d8513cb64be545367a1175ca21d3ca5e1d652fb3326af59a7161916"
	I0528 21:36:35.212932   53940 logs.go:123] Gathering logs for container status ...
	I0528 21:36:35.212952   53940 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:36:35.272285   53940 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:36:35.272302   53940 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:36:35.347507   53940 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:36:35.347520   53940 logs.go:123] Gathering logs for coredns [8d8a8c5eaac45dad52d6abbf32bc6bbe51b4145ba424b99c4329b21652154fd8] ...
	I0528 21:36:35.347530   53940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d8a8c5eaac45dad52d6abbf32bc6bbe51b4145ba424b99c4329b21652154fd8"
	I0528 21:36:35.387945   53940 logs.go:123] Gathering logs for kube-scheduler [0bf5138f27288ab0cbd4066c2b1a2e22117358f14e41a55a44f093965eeedd2b] ...
	I0528 21:36:35.387965   53940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0bf5138f27288ab0cbd4066c2b1a2e22117358f14e41a55a44f093965eeedd2b"
	I0528 21:36:35.476306   53940 logs.go:123] Gathering logs for kube-scheduler [07d7437db436b52ffc1890444d4c9171c35d237362b6896b4ec3807afd80136c] ...
	I0528 21:36:35.476330   53940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07d7437db436b52ffc1890444d4c9171c35d237362b6896b4ec3807afd80136c"
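The repeated pattern in the log lines above is: list container IDs for a component with "crictl ps -a --quiet --name=<component>", then tail each container's logs with "crictl logs --tail 400 <id>". A minimal Go sketch of that pattern, useful for reproducing the same collection by hand on a node, follows; it assumes crictl is on the PATH and passwordless sudo, and the component list is just the one that appears in this log, not minikube's own implementation.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors: sudo crictl ps -a --quiet --name=<component>
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailLogs mirrors: sudo crictl logs --tail 400 <id>
func tailLogs(id string) string {
	out, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
	return string(out)
}

func main() {
	// Component names taken from the log above.
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "storage-provisioner"}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		for _, id := range ids {
			fmt.Printf("== %s [%s] ==\n%s\n", c, id, tailLogs(id))
		}
	}
}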
	I0528 21:36:38.025428   53940 api_server.go:253] Checking apiserver healthz at https://192.168.72.246:8443/healthz ...
	I0528 21:36:38.026047   53940 api_server.go:269] stopped: https://192.168.72.246:8443/healthz: Get "https://192.168.72.246:8443/healthz": dial tcp 192.168.72.246:8443: connect: connection refused
	I0528 21:36:38.026092   53940 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:36:38.026137   53940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:36:38.076138   53940 cri.go:89] found id: "8be208bebdfe03545df7824da44f7afc62d5b7ab3cf1b0017e6ddeb8da35bd1d"
	I0528 21:36:38.076152   53940 cri.go:89] found id: ""
	I0528 21:36:38.076160   53940 logs.go:276] 1 containers: [8be208bebdfe03545df7824da44f7afc62d5b7ab3cf1b0017e6ddeb8da35bd1d]
	I0528 21:36:38.076217   53940 ssh_runner.go:195] Run: which crictl
	I0528 21:36:38.082128   53940 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:36:38.082197   53940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:36:38.134296   53940 cri.go:89] found id: "f5c23d93df45849840c288924a91213e4118a16dca362e5de8c27a3f5dc2ea90"
	I0528 21:36:38.134309   53940 cri.go:89] found id: ""
	I0528 21:36:38.134316   53940 logs.go:276] 1 containers: [f5c23d93df45849840c288924a91213e4118a16dca362e5de8c27a3f5dc2ea90]
	I0528 21:36:38.134369   53940 ssh_runner.go:195] Run: which crictl
	I0528 21:36:38.139345   53940 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:36:38.139393   53940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:36:38.191884   53940 cri.go:89] found id: "8d8a8c5eaac45dad52d6abbf32bc6bbe51b4145ba424b99c4329b21652154fd8"
	I0528 21:36:38.191897   53940 cri.go:89] found id: ""
	I0528 21:36:38.191903   53940 logs.go:276] 1 containers: [8d8a8c5eaac45dad52d6abbf32bc6bbe51b4145ba424b99c4329b21652154fd8]
	I0528 21:36:38.191947   53940 ssh_runner.go:195] Run: which crictl
	I0528 21:36:38.198011   53940 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:36:38.198078   53940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:36:38.256144   53940 cri.go:89] found id: "0bf5138f27288ab0cbd4066c2b1a2e22117358f14e41a55a44f093965eeedd2b"
	I0528 21:36:38.256155   53940 cri.go:89] found id: "07d7437db436b52ffc1890444d4c9171c35d237362b6896b4ec3807afd80136c"
	I0528 21:36:38.256159   53940 cri.go:89] found id: ""
	I0528 21:36:38.256166   53940 logs.go:276] 2 containers: [0bf5138f27288ab0cbd4066c2b1a2e22117358f14e41a55a44f093965eeedd2b 07d7437db436b52ffc1890444d4c9171c35d237362b6896b4ec3807afd80136c]
	I0528 21:36:38.256226   53940 ssh_runner.go:195] Run: which crictl
	I0528 21:36:38.263238   53940 ssh_runner.go:195] Run: which crictl
	I0528 21:36:38.268564   53940 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:36:38.268613   53940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:36:38.317617   53940 cri.go:89] found id: "6855d61e8d8513cb64be545367a1175ca21d3ca5e1d652fb3326af59a7161916"
	I0528 21:36:38.317631   53940 cri.go:89] found id: ""
	I0528 21:36:38.317639   53940 logs.go:276] 1 containers: [6855d61e8d8513cb64be545367a1175ca21d3ca5e1d652fb3326af59a7161916]
	I0528 21:36:38.317692   53940 ssh_runner.go:195] Run: which crictl
	I0528 21:36:38.323806   53940 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:36:38.323865   53940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:36:38.373941   53940 cri.go:89] found id: "cd48d9a682f35098694aa404e88bada604d846500b0d8522b2ffdaac02372052"
	I0528 21:36:38.373954   53940 cri.go:89] found id: ""
	I0528 21:36:38.373963   53940 logs.go:276] 1 containers: [cd48d9a682f35098694aa404e88bada604d846500b0d8522b2ffdaac02372052]
	I0528 21:36:38.374020   53940 ssh_runner.go:195] Run: which crictl
	I0528 21:36:38.378740   53940 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:36:38.378793   53940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:36:38.422288   53940 cri.go:89] found id: ""
	I0528 21:36:38.422304   53940 logs.go:276] 0 containers: []
	W0528 21:36:38.422312   53940 logs.go:278] No container was found matching "kindnet"
	I0528 21:36:38.422319   53940 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0528 21:36:38.422372   53940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0528 21:36:38.470985   53940 cri.go:89] found id: "92d9169ab6ccab7ee6b0266d51f9372a739795eecb87253521e7fb9aa8ef7788"
	I0528 21:36:38.470998   53940 cri.go:89] found id: ""
	I0528 21:36:38.471006   53940 logs.go:276] 1 containers: [92d9169ab6ccab7ee6b0266d51f9372a739795eecb87253521e7fb9aa8ef7788]
	I0528 21:36:38.471058   53940 ssh_runner.go:195] Run: which crictl
	I0528 21:36:38.475447   53940 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:36:38.475461   53940 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:36:38.816873   53940 logs.go:123] Gathering logs for kubelet ...
	I0528 21:36:38.816888   53940 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:36:38.932860   53940 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:36:38.932879   53940 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:36:39.002697   53940 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:36:39.002709   53940 logs.go:123] Gathering logs for kube-scheduler [0bf5138f27288ab0cbd4066c2b1a2e22117358f14e41a55a44f093965eeedd2b] ...
	I0528 21:36:39.002724   53940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0bf5138f27288ab0cbd4066c2b1a2e22117358f14e41a55a44f093965eeedd2b"
	I0528 21:36:39.078733   53940 logs.go:123] Gathering logs for kube-scheduler [07d7437db436b52ffc1890444d4c9171c35d237362b6896b4ec3807afd80136c] ...
	I0528 21:36:39.078750   53940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07d7437db436b52ffc1890444d4c9171c35d237362b6896b4ec3807afd80136c"
	I0528 21:36:39.115869   53940 logs.go:123] Gathering logs for storage-provisioner [92d9169ab6ccab7ee6b0266d51f9372a739795eecb87253521e7fb9aa8ef7788] ...
	I0528 21:36:39.115886   53940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92d9169ab6ccab7ee6b0266d51f9372a739795eecb87253521e7fb9aa8ef7788"
	I0528 21:36:39.157060   53940 logs.go:123] Gathering logs for kube-controller-manager [cd48d9a682f35098694aa404e88bada604d846500b0d8522b2ffdaac02372052] ...
	I0528 21:36:39.157074   53940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd48d9a682f35098694aa404e88bada604d846500b0d8522b2ffdaac02372052"
	I0528 21:36:39.199299   53940 logs.go:123] Gathering logs for container status ...
	I0528 21:36:39.199311   53940 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:36:35.999399   59176 machine.go:94] provisionDockerMachine start ...
	I0528 21:36:35.999420   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .DriverName
	I0528 21:36:35.999629   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHHostname
	I0528 21:36:36.002754   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:36:36.003257   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:04:a3", ip: ""} in network mk-kubernetes-upgrade-314578: {Iface:virbr1 ExpiryTime:2024-05-28 22:36:08 +0000 UTC Type:0 Mac:52:54:00:f7:04:a3 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:kubernetes-upgrade-314578 Clientid:01:52:54:00:f7:04:a3}
	I0528 21:36:36.003289   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined IP address 192.168.39.174 and MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:36:36.003418   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHPort
	I0528 21:36:36.003703   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHKeyPath
	I0528 21:36:36.004006   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHKeyPath
	I0528 21:36:36.004177   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHUsername
	I0528 21:36:36.004348   59176 main.go:141] libmachine: Using SSH client type: native
	I0528 21:36:36.004584   59176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0528 21:36:36.004600   59176 main.go:141] libmachine: About to run SSH command:
	hostname
	I0528 21:36:36.115081   59176 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-314578
	
	I0528 21:36:36.115120   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetMachineName
	I0528 21:36:36.115362   59176 buildroot.go:166] provisioning hostname "kubernetes-upgrade-314578"
	I0528 21:36:36.115391   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetMachineName
	I0528 21:36:36.115592   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHHostname
	I0528 21:36:36.118870   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:36:36.119402   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:04:a3", ip: ""} in network mk-kubernetes-upgrade-314578: {Iface:virbr1 ExpiryTime:2024-05-28 22:36:08 +0000 UTC Type:0 Mac:52:54:00:f7:04:a3 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:kubernetes-upgrade-314578 Clientid:01:52:54:00:f7:04:a3}
	I0528 21:36:36.119526   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined IP address 192.168.39.174 and MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:36:36.119753   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHPort
	I0528 21:36:36.119962   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHKeyPath
	I0528 21:36:36.120141   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHKeyPath
	I0528 21:36:36.120420   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHUsername
	I0528 21:36:36.120632   59176 main.go:141] libmachine: Using SSH client type: native
	I0528 21:36:36.120887   59176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0528 21:36:36.120919   59176 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-314578 && echo "kubernetes-upgrade-314578" | sudo tee /etc/hostname
	I0528 21:36:36.256597   59176 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-314578
	
	I0528 21:36:36.256629   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHHostname
	I0528 21:36:36.259716   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:36:36.260189   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:04:a3", ip: ""} in network mk-kubernetes-upgrade-314578: {Iface:virbr1 ExpiryTime:2024-05-28 22:36:08 +0000 UTC Type:0 Mac:52:54:00:f7:04:a3 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:kubernetes-upgrade-314578 Clientid:01:52:54:00:f7:04:a3}
	I0528 21:36:36.260221   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined IP address 192.168.39.174 and MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:36:36.260402   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHPort
	I0528 21:36:36.260617   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHKeyPath
	I0528 21:36:36.260802   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHKeyPath
	I0528 21:36:36.260980   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHUsername
	I0528 21:36:36.261180   59176 main.go:141] libmachine: Using SSH client type: native
	I0528 21:36:36.261346   59176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0528 21:36:36.261368   59176 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-314578' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-314578/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-314578' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 21:36:36.371502   59176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
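The SSH command above keeps /etc/hosts consistent with the new hostname: if no line already ends with the hostname, it either rewrites the 127.0.1.1 entry in place or appends one. A small Go sketch of the same idempotent rewrite, applied to an in-memory copy of the file rather than remotely via sed, is shown below; the sample contents are illustrative.

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostname mirrors the shell logic above: if no line already maps to
// the hostname, rewrite the 127.0.1.1 entry or append a new one.
func ensureHostname(hosts, hostname string) string {
	present := regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`)
	if present.MatchString(hosts) {
		return hosts // already present, nothing to do
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+hostname)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + hostname + "\n"
}

func main() {
	sample := "127.0.0.1 localhost\n127.0.1.1 old-name\n"
	fmt.Print(ensureHostname(sample, "kubernetes-upgrade-314578"))
}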
	I0528 21:36:36.371532   59176 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18966-3963/.minikube CaCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18966-3963/.minikube}
	I0528 21:36:36.371567   59176 buildroot.go:174] setting up certificates
	I0528 21:36:36.371579   59176 provision.go:84] configureAuth start
	I0528 21:36:36.371591   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetMachineName
	I0528 21:36:36.371896   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetIP
	I0528 21:36:36.375227   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:36:36.375653   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:04:a3", ip: ""} in network mk-kubernetes-upgrade-314578: {Iface:virbr1 ExpiryTime:2024-05-28 22:36:08 +0000 UTC Type:0 Mac:52:54:00:f7:04:a3 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:kubernetes-upgrade-314578 Clientid:01:52:54:00:f7:04:a3}
	I0528 21:36:36.375681   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined IP address 192.168.39.174 and MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:36:36.375836   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHHostname
	I0528 21:36:36.378315   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:36:36.378717   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:04:a3", ip: ""} in network mk-kubernetes-upgrade-314578: {Iface:virbr1 ExpiryTime:2024-05-28 22:36:08 +0000 UTC Type:0 Mac:52:54:00:f7:04:a3 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:kubernetes-upgrade-314578 Clientid:01:52:54:00:f7:04:a3}
	I0528 21:36:36.378755   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined IP address 192.168.39.174 and MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:36:36.378918   59176 provision.go:143] copyHostCerts
	I0528 21:36:36.378985   59176 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem, removing ...
	I0528 21:36:36.379003   59176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem
	I0528 21:36:36.379076   59176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem (1078 bytes)
	I0528 21:36:36.379226   59176 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem, removing ...
	I0528 21:36:36.379242   59176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem
	I0528 21:36:36.379286   59176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem (1123 bytes)
	I0528 21:36:36.379387   59176 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem, removing ...
	I0528 21:36:36.379400   59176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem
	I0528 21:36:36.379439   59176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem (1675 bytes)
	I0528 21:36:36.379530   59176 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-314578 san=[127.0.0.1 192.168.39.174 kubernetes-upgrade-314578 localhost minikube]
	I0528 21:36:36.625679   59176 provision.go:177] copyRemoteCerts
	I0528 21:36:36.625735   59176 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 21:36:36.625756   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHHostname
	I0528 21:36:36.629002   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:36:36.629373   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:04:a3", ip: ""} in network mk-kubernetes-upgrade-314578: {Iface:virbr1 ExpiryTime:2024-05-28 22:36:08 +0000 UTC Type:0 Mac:52:54:00:f7:04:a3 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:kubernetes-upgrade-314578 Clientid:01:52:54:00:f7:04:a3}
	I0528 21:36:36.629408   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined IP address 192.168.39.174 and MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:36:36.629587   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHPort
	I0528 21:36:36.629808   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHKeyPath
	I0528 21:36:36.630013   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHUsername
	I0528 21:36:36.630169   59176 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/kubernetes-upgrade-314578/id_rsa Username:docker}
	I0528 21:36:36.717083   59176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0528 21:36:36.745463   59176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0528 21:36:36.777165   59176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0528 21:36:36.808695   59176 provision.go:87] duration metric: took 437.103319ms to configureAuth
	I0528 21:36:36.808719   59176 buildroot.go:189] setting minikube options for container-runtime
	I0528 21:36:36.808929   59176 config.go:182] Loaded profile config "kubernetes-upgrade-314578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 21:36:36.809006   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHHostname
	I0528 21:36:36.811584   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:36:36.811965   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:04:a3", ip: ""} in network mk-kubernetes-upgrade-314578: {Iface:virbr1 ExpiryTime:2024-05-28 22:36:08 +0000 UTC Type:0 Mac:52:54:00:f7:04:a3 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:kubernetes-upgrade-314578 Clientid:01:52:54:00:f7:04:a3}
	I0528 21:36:36.812011   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined IP address 192.168.39.174 and MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:36:36.812247   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHPort
	I0528 21:36:36.812465   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHKeyPath
	I0528 21:36:36.812655   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHKeyPath
	I0528 21:36:36.812833   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHUsername
	I0528 21:36:36.813035   59176 main.go:141] libmachine: Using SSH client type: native
	I0528 21:36:36.813206   59176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0528 21:36:36.813226   59176 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0528 21:36:37.798372   59176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0528 21:36:37.798402   59176 machine.go:97] duration metric: took 1.798988011s to provisionDockerMachine
	I0528 21:36:37.798428   59176 start.go:293] postStartSetup for "kubernetes-upgrade-314578" (driver="kvm2")
	I0528 21:36:37.798442   59176 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0528 21:36:37.798462   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .DriverName
	I0528 21:36:37.798822   59176 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0528 21:36:37.798858   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHHostname
	I0528 21:36:37.801796   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:36:37.802203   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:04:a3", ip: ""} in network mk-kubernetes-upgrade-314578: {Iface:virbr1 ExpiryTime:2024-05-28 22:36:08 +0000 UTC Type:0 Mac:52:54:00:f7:04:a3 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:kubernetes-upgrade-314578 Clientid:01:52:54:00:f7:04:a3}
	I0528 21:36:37.802253   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined IP address 192.168.39.174 and MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:36:37.802329   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHPort
	I0528 21:36:37.802539   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHKeyPath
	I0528 21:36:37.802712   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHUsername
	I0528 21:36:37.802885   59176 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/kubernetes-upgrade-314578/id_rsa Username:docker}
	I0528 21:36:37.942256   59176 ssh_runner.go:195] Run: cat /etc/os-release
	I0528 21:36:37.951898   59176 info.go:137] Remote host: Buildroot 2023.02.9
	I0528 21:36:37.951926   59176 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/addons for local assets ...
	I0528 21:36:37.951982   59176 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/files for local assets ...
	I0528 21:36:37.952054   59176 filesync.go:149] local asset: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem -> 117602.pem in /etc/ssl/certs
	I0528 21:36:37.952137   59176 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0528 21:36:37.977704   59176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem --> /etc/ssl/certs/117602.pem (1708 bytes)
	I0528 21:36:38.020670   59176 start.go:296] duration metric: took 222.226939ms for postStartSetup
	I0528 21:36:38.020733   59176 fix.go:56] duration metric: took 2.046364061s for fixHost
	I0528 21:36:38.020757   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHHostname
	I0528 21:36:38.023784   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:36:38.024180   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:04:a3", ip: ""} in network mk-kubernetes-upgrade-314578: {Iface:virbr1 ExpiryTime:2024-05-28 22:36:08 +0000 UTC Type:0 Mac:52:54:00:f7:04:a3 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:kubernetes-upgrade-314578 Clientid:01:52:54:00:f7:04:a3}
	I0528 21:36:38.024209   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined IP address 192.168.39.174 and MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:36:38.024374   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHPort
	I0528 21:36:38.024564   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHKeyPath
	I0528 21:36:38.024749   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHKeyPath
	I0528 21:36:38.024934   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHUsername
	I0528 21:36:38.025131   59176 main.go:141] libmachine: Using SSH client type: native
	I0528 21:36:38.025360   59176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0528 21:36:38.025390   59176 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0528 21:36:38.245577   59176 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716932198.186645832
	
	I0528 21:36:38.245606   59176 fix.go:216] guest clock: 1716932198.186645832
	I0528 21:36:38.245616   59176 fix.go:229] Guest: 2024-05-28 21:36:38.186645832 +0000 UTC Remote: 2024-05-28 21:36:38.020738507 +0000 UTC m=+2.204950128 (delta=165.907325ms)
	I0528 21:36:38.245640   59176 fix.go:200] guest clock delta is within tolerance: 165.907325ms
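The tolerance check above compares the guest's "date +%s.%N" output against the host-side timestamp and accepts the machine when the delta is small. A hedged Go sketch of that comparison follows; the parsing of the fractional seconds and the one-second tolerance are assumptions for illustration, not minikube's exact rule. With the values from the log it reproduces the logged delta of about 165.9ms.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock parses `date +%s.%N` style output as seen above.
func parseGuestClock(s string) (time.Time, error) {
	secPart, fracPart, _ := strings.Cut(strings.TrimSpace(s), ".")
	sec, err := strconv.ParseInt(secPart, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if fracPart != "" {
		// Normalize the fractional part to exactly nine digits (nanoseconds).
		nsec, err = strconv.ParseInt((fracPart + "000000000")[:9], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, _ := parseGuestClock("1716932198.186645832") // value from the log above
	remote := time.Date(2024, 5, 28, 21, 36, 38, 20738507, time.UTC)
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta < time.Second)
}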
	I0528 21:36:38.245647   59176 start.go:83] releasing machines lock for "kubernetes-upgrade-314578", held for 2.271315308s
	I0528 21:36:38.245672   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .DriverName
	I0528 21:36:38.245995   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetIP
	I0528 21:36:38.249815   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:36:38.250214   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:04:a3", ip: ""} in network mk-kubernetes-upgrade-314578: {Iface:virbr1 ExpiryTime:2024-05-28 22:36:08 +0000 UTC Type:0 Mac:52:54:00:f7:04:a3 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:kubernetes-upgrade-314578 Clientid:01:52:54:00:f7:04:a3}
	I0528 21:36:38.250294   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined IP address 192.168.39.174 and MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:36:38.250544   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .DriverName
	I0528 21:36:38.251245   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .DriverName
	I0528 21:36:38.251444   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .DriverName
	I0528 21:36:38.251523   59176 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0528 21:36:38.251577   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHHostname
	I0528 21:36:38.251641   59176 ssh_runner.go:195] Run: cat /version.json
	I0528 21:36:38.251665   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHHostname
	I0528 21:36:38.255708   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:36:38.256130   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:36:38.256721   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:04:a3", ip: ""} in network mk-kubernetes-upgrade-314578: {Iface:virbr1 ExpiryTime:2024-05-28 22:36:08 +0000 UTC Type:0 Mac:52:54:00:f7:04:a3 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:kubernetes-upgrade-314578 Clientid:01:52:54:00:f7:04:a3}
	I0528 21:36:38.256750   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined IP address 192.168.39.174 and MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:36:38.256847   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:04:a3", ip: ""} in network mk-kubernetes-upgrade-314578: {Iface:virbr1 ExpiryTime:2024-05-28 22:36:08 +0000 UTC Type:0 Mac:52:54:00:f7:04:a3 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:kubernetes-upgrade-314578 Clientid:01:52:54:00:f7:04:a3}
	I0528 21:36:38.256904   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined IP address 192.168.39.174 and MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:36:38.257388   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHPort
	I0528 21:36:38.257630   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHKeyPath
	I0528 21:36:38.257634   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHPort
	I0528 21:36:38.257841   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHKeyPath
	I0528 21:36:38.257892   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHUsername
	I0528 21:36:38.258042   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetSSHUsername
	I0528 21:36:38.258095   59176 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/kubernetes-upgrade-314578/id_rsa Username:docker}
	I0528 21:36:38.258469   59176 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/kubernetes-upgrade-314578/id_rsa Username:docker}
	I0528 21:36:38.435398   59176 ssh_runner.go:195] Run: systemctl --version
	I0528 21:36:38.468132   59176 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0528 21:36:38.711215   59176 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0528 21:36:38.717653   59176 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0528 21:36:38.717715   59176 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 21:36:38.730782   59176 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0528 21:36:38.730803   59176 start.go:494] detecting cgroup driver to use...
	I0528 21:36:38.730871   59176 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0528 21:36:38.753627   59176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 21:36:38.770804   59176 docker.go:217] disabling cri-docker service (if available) ...
	I0528 21:36:38.770861   59176 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0528 21:36:38.784815   59176 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0528 21:36:38.799069   59176 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0528 21:36:38.996928   59176 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0528 21:36:39.172622   59176 docker.go:233] disabling docker service ...
	I0528 21:36:39.172693   59176 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0528 21:36:39.194609   59176 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0528 21:36:39.212520   59176 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0528 21:36:39.426195   59176 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0528 21:36:39.587208   59176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0528 21:36:39.601975   59176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 21:36:39.620454   59176 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0528 21:36:39.620511   59176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:36:39.631719   59176 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0528 21:36:39.631770   59176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:36:39.647826   59176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:36:39.662666   59176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:36:39.679026   59176 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0528 21:36:39.693680   59176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:36:39.704314   59176 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:36:39.718479   59176 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:36:39.732658   59176 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0528 21:36:39.743738   59176 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0528 21:36:39.756118   59176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 21:36:39.922367   59176 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0528 21:36:40.315055   59176 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0528 21:36:40.315146   59176 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0528 21:36:40.321066   59176 start.go:562] Will wait 60s for crictl version
	I0528 21:36:40.321119   59176 ssh_runner.go:195] Run: which crictl
	I0528 21:36:40.325380   59176 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0528 21:36:40.360853   59176 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
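The version probe above emits simple key/value lines. A short Go sketch of parsing that text output into a map follows; the field names are taken verbatim from the log, and the parsing itself is an illustration rather than the crictl client library.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseCrictlVersion splits each "Key:  value" line of the text output.
func parseCrictlVersion(out string) map[string]string {
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		if k, v, ok := strings.Cut(sc.Text(), ":"); ok {
			fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
		}
	}
	return fields
}

func main() {
	out := "Version:  0.1.0\nRuntimeName:  cri-o\nRuntimeVersion:  1.29.1\nRuntimeApiVersion:  v1\n"
	v := parseCrictlVersion(out)
	fmt.Println(v["RuntimeName"], v["RuntimeVersion"])
}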
	I0528 21:36:40.360925   59176 ssh_runner.go:195] Run: crio --version
	I0528 21:36:40.390417   59176 ssh_runner.go:195] Run: crio --version
	I0528 21:36:40.420774   59176 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0528 21:36:40.422003   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) Calling .GetIP
	I0528 21:36:40.424454   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:36:40.424818   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:04:a3", ip: ""} in network mk-kubernetes-upgrade-314578: {Iface:virbr1 ExpiryTime:2024-05-28 22:36:08 +0000 UTC Type:0 Mac:52:54:00:f7:04:a3 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:kubernetes-upgrade-314578 Clientid:01:52:54:00:f7:04:a3}
	I0528 21:36:40.424842   59176 main.go:141] libmachine: (kubernetes-upgrade-314578) DBG | domain kubernetes-upgrade-314578 has defined IP address 192.168.39.174 and MAC address 52:54:00:f7:04:a3 in network mk-kubernetes-upgrade-314578
	I0528 21:36:40.424997   59176 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0528 21:36:40.429187   59176 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-314578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kubernetes-upgrade-314578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0528 21:36:40.429283   59176 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 21:36:40.429325   59176 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 21:36:40.479137   59176 crio.go:514] all images are preloaded for cri-o runtime.
	I0528 21:36:40.479158   59176 crio.go:433] Images already preloaded, skipping extraction
	I0528 21:36:40.479210   59176 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 21:36:40.526294   59176 crio.go:514] all images are preloaded for cri-o runtime.
	I0528 21:36:40.526328   59176 cache_images.go:84] Images are preloaded, skipping loading
	I0528 21:36:40.526350   59176 kubeadm.go:928] updating node { 192.168.39.174 8443 v1.30.1 crio true true} ...
	I0528 21:36:40.526513   59176 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-314578 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:kubernetes-upgrade-314578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0528 21:36:40.526597   59176 ssh_runner.go:195] Run: crio config
	I0528 21:36:40.575703   59176 cni.go:84] Creating CNI manager for ""
	I0528 21:36:40.575725   59176 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 21:36:40.575736   59176 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0528 21:36:40.575754   59176 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.174 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-314578 NodeName:kubernetes-upgrade-314578 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0528 21:36:40.575875   59176 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.174
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-314578"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.174
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.174"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
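The kubeadm, kubelet, and kube-proxy configuration above is rendered from the options logged at kubeadm.go:181. As an illustration of that kind of templating, the Go sketch below renders a small ClusterConfiguration fragment with text/template from a handful of parameters; the struct and template here are assumptions for illustration, not minikube's actual template, which carries many more fields.

package main

import (
	"os"
	"text/template"
)

// renderParams holds the few values the fragment below needs.
type renderParams struct {
	APIServerPort     int
	KubernetesVersion string
	PodSubnet         string
	ServiceSubnet     string
}

const clusterConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := renderParams{
		APIServerPort:     8443,
		KubernetesVersion: "v1.30.1",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
	}
	t := template.Must(template.New("cluster").Parse(clusterConfigTmpl))
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}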
	I0528 21:36:40.575931   59176 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0528 21:36:40.585821   59176 binaries.go:44] Found k8s binaries, skipping transfer
	I0528 21:36:40.585879   59176 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0528 21:36:40.595057   59176 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0528 21:36:40.612566   59176 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0528 21:36:40.630327   59176 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0528 21:36:40.647784   59176 ssh_runner.go:195] Run: grep 192.168.39.174	control-plane.minikube.internal$ /etc/hosts
	I0528 21:36:40.651870   59176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 21:36:40.777229   59176 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 21:36:40.794887   59176 certs.go:68] Setting up /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kubernetes-upgrade-314578 for IP: 192.168.39.174
	I0528 21:36:40.794924   59176 certs.go:194] generating shared ca certs ...
	I0528 21:36:40.794948   59176 certs.go:226] acquiring lock for ca certs: {Name:mkf081a2e878fd451cfbdf01dc28a5f7a6a3abc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:36:40.795159   59176 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key
	I0528 21:36:40.795227   59176 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key
	I0528 21:36:40.795242   59176 certs.go:256] generating profile certs ...
	I0528 21:36:40.795365   59176 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kubernetes-upgrade-314578/client.key
	I0528 21:36:40.795417   59176 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kubernetes-upgrade-314578/apiserver.key.5d9c2211
	I0528 21:36:40.795477   59176 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kubernetes-upgrade-314578/proxy-client.key
	I0528 21:36:40.795641   59176 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem (1338 bytes)
	W0528 21:36:40.795685   59176 certs.go:480] ignoring /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760_empty.pem, impossibly tiny 0 bytes
	I0528 21:36:40.795696   59176 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem (1675 bytes)
	I0528 21:36:40.795729   59176 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem (1078 bytes)
	I0528 21:36:40.795777   59176 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem (1123 bytes)
	I0528 21:36:40.795814   59176 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem (1675 bytes)
	I0528 21:36:40.795873   59176 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem (1708 bytes)
	I0528 21:36:40.796493   59176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0528 21:36:40.824645   59176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0528 21:36:40.848252   59176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
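The certs.go lines above skip regenerating certificates that are still valid. A minimal Go sketch of such a check, parsing a PEM file with crypto/x509 and comparing NotAfter against a renewal margin, follows; the file path and the 24-hour margin are assumptions for illustration, not minikube's exact validity test.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// certStillValid reports whether the certificate at path remains valid for
// at least the given margin from now.
func certStillValid(path string, margin time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(margin).Before(cert.NotAfter), nil
}

func main() {
	ok, err := certStillValid("/var/lib/minikube/certs/ca.crt", 24*time.Hour)
	fmt.Println("reuse existing cert:", ok, err)
}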
	I0528 21:36:37.621345   58273 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-krslt" in "kube-system" namespace to be "Ready" ...
	I0528 21:36:39.628368   58273 pod_ready.go:102] pod "coredns-7db6d8ff4d-krslt" in "kube-system" namespace has status "Ready":"False"
	I0528 21:36:42.128541   58273 pod_ready.go:102] pod "coredns-7db6d8ff4d-krslt" in "kube-system" namespace has status "Ready":"False"
	I0528 21:36:39.246535   53940 logs.go:123] Gathering logs for dmesg ...
	I0528 21:36:39.246550   53940 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:36:39.265070   53940 logs.go:123] Gathering logs for kube-apiserver [8be208bebdfe03545df7824da44f7afc62d5b7ab3cf1b0017e6ddeb8da35bd1d] ...
	I0528 21:36:39.265087   53940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8be208bebdfe03545df7824da44f7afc62d5b7ab3cf1b0017e6ddeb8da35bd1d"
	I0528 21:36:39.324419   53940 logs.go:123] Gathering logs for etcd [f5c23d93df45849840c288924a91213e4118a16dca362e5de8c27a3f5dc2ea90] ...
	I0528 21:36:39.324438   53940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5c23d93df45849840c288924a91213e4118a16dca362e5de8c27a3f5dc2ea90"
	I0528 21:36:39.379289   53940 logs.go:123] Gathering logs for coredns [8d8a8c5eaac45dad52d6abbf32bc6bbe51b4145ba424b99c4329b21652154fd8] ...
	I0528 21:36:39.379307   53940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d8a8c5eaac45dad52d6abbf32bc6bbe51b4145ba424b99c4329b21652154fd8"
	I0528 21:36:39.422486   53940 logs.go:123] Gathering logs for kube-proxy [6855d61e8d8513cb64be545367a1175ca21d3ca5e1d652fb3326af59a7161916] ...
	I0528 21:36:39.422505   53940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6855d61e8d8513cb64be545367a1175ca21d3ca5e1d652fb3326af59a7161916"
	I0528 21:36:41.969838   53940 api_server.go:253] Checking apiserver healthz at https://192.168.72.246:8443/healthz ...
	I0528 21:36:41.970373   53940 api_server.go:269] stopped: https://192.168.72.246:8443/healthz: Get "https://192.168.72.246:8443/healthz": dial tcp 192.168.72.246:8443: connect: connection refused
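	The two api_server.go lines above capture minikube's liveness probe pattern: issue a GET against the apiserver's /healthz endpoint and treat a refused TCP connection as the apiserver being stopped. The following is a minimal sketch of that probe, not minikube's own api_server.go code; the URL is taken from the log, and TLS verification is skipped because the check cares only about reachability, not certificate identity.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// apiserverHealthy returns nil when GET <url> answers 200 OK; a transport
	// error (e.g. "connect: connection refused") is reported as "stopped".
	func apiserverHealthy(url string) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			return fmt.Errorf("stopped: %w", err)
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("unhealthy: %s", resp.Status)
		}
		return nil
	}

	func main() {
		// Address taken from the log above; substitute your own node IP.
		fmt.Println(apiserverHealthy("https://192.168.72.246:8443/healthz"))
	}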
	I0528 21:36:41.970415   53940 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:36:41.970463   53940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:36:42.012535   53940 cri.go:89] found id: "8be208bebdfe03545df7824da44f7afc62d5b7ab3cf1b0017e6ddeb8da35bd1d"
	I0528 21:36:42.012549   53940 cri.go:89] found id: ""
	I0528 21:36:42.012556   53940 logs.go:276] 1 containers: [8be208bebdfe03545df7824da44f7afc62d5b7ab3cf1b0017e6ddeb8da35bd1d]
	I0528 21:36:42.012599   53940 ssh_runner.go:195] Run: which crictl
	I0528 21:36:42.017252   53940 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:36:42.017298   53940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:36:42.062397   53940 cri.go:89] found id: "f5c23d93df45849840c288924a91213e4118a16dca362e5de8c27a3f5dc2ea90"
	I0528 21:36:42.062410   53940 cri.go:89] found id: ""
	I0528 21:36:42.062417   53940 logs.go:276] 1 containers: [f5c23d93df45849840c288924a91213e4118a16dca362e5de8c27a3f5dc2ea90]
	I0528 21:36:42.062468   53940 ssh_runner.go:195] Run: which crictl
	I0528 21:36:42.067943   53940 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:36:42.067988   53940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:36:42.113284   53940 cri.go:89] found id: "8d8a8c5eaac45dad52d6abbf32bc6bbe51b4145ba424b99c4329b21652154fd8"
	I0528 21:36:42.113298   53940 cri.go:89] found id: ""
	I0528 21:36:42.113305   53940 logs.go:276] 1 containers: [8d8a8c5eaac45dad52d6abbf32bc6bbe51b4145ba424b99c4329b21652154fd8]
	I0528 21:36:42.113362   53940 ssh_runner.go:195] Run: which crictl
	I0528 21:36:42.117859   53940 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:36:42.117906   53940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:36:42.158782   53940 cri.go:89] found id: "0bf5138f27288ab0cbd4066c2b1a2e22117358f14e41a55a44f093965eeedd2b"
	I0528 21:36:42.158795   53940 cri.go:89] found id: "07d7437db436b52ffc1890444d4c9171c35d237362b6896b4ec3807afd80136c"
	I0528 21:36:42.158798   53940 cri.go:89] found id: ""
	I0528 21:36:42.158806   53940 logs.go:276] 2 containers: [0bf5138f27288ab0cbd4066c2b1a2e22117358f14e41a55a44f093965eeedd2b 07d7437db436b52ffc1890444d4c9171c35d237362b6896b4ec3807afd80136c]
	I0528 21:36:42.158857   53940 ssh_runner.go:195] Run: which crictl
	I0528 21:36:42.164838   53940 ssh_runner.go:195] Run: which crictl
	I0528 21:36:42.169935   53940 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:36:42.169985   53940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:36:42.217689   53940 cri.go:89] found id: "6855d61e8d8513cb64be545367a1175ca21d3ca5e1d652fb3326af59a7161916"
	I0528 21:36:42.217701   53940 cri.go:89] found id: ""
	I0528 21:36:42.217708   53940 logs.go:276] 1 containers: [6855d61e8d8513cb64be545367a1175ca21d3ca5e1d652fb3326af59a7161916]
	I0528 21:36:42.217776   53940 ssh_runner.go:195] Run: which crictl
	I0528 21:36:42.227504   53940 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:36:42.227542   53940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:36:42.280486   53940 cri.go:89] found id: "cd48d9a682f35098694aa404e88bada604d846500b0d8522b2ffdaac02372052"
	I0528 21:36:42.280500   53940 cri.go:89] found id: ""
	I0528 21:36:42.280507   53940 logs.go:276] 1 containers: [cd48d9a682f35098694aa404e88bada604d846500b0d8522b2ffdaac02372052]
	I0528 21:36:42.280555   53940 ssh_runner.go:195] Run: which crictl
	I0528 21:36:42.285418   53940 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:36:42.285462   53940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:36:42.323862   53940 cri.go:89] found id: ""
	I0528 21:36:42.323878   53940 logs.go:276] 0 containers: []
	W0528 21:36:42.323888   53940 logs.go:278] No container was found matching "kindnet"
	I0528 21:36:42.323894   53940 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0528 21:36:42.323952   53940 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0528 21:36:42.367335   53940 cri.go:89] found id: "92d9169ab6ccab7ee6b0266d51f9372a739795eecb87253521e7fb9aa8ef7788"
	I0528 21:36:42.367348   53940 cri.go:89] found id: ""
	I0528 21:36:42.367356   53940 logs.go:276] 1 containers: [92d9169ab6ccab7ee6b0266d51f9372a739795eecb87253521e7fb9aa8ef7788]
	I0528 21:36:42.367407   53940 ssh_runner.go:195] Run: which crictl
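	The cri.go and logs.go lines above follow a fixed discovery pattern: for each control-plane component, run "sudo crictl ps -a --quiet --name=<component>" and collect the printed container IDs before tailing their logs. Below is a minimal sketch of that pattern, assuming crictl and sudo are available on the host; it is not minikube's cri.go, just an illustration of the same command usage.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs lists all containers (any state) whose name matches the
	// given component, one ID per line of crictl output.
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
		if err != nil {
			return nil, fmt.Errorf("crictl ps for %s: %w", component, err)
		}
		var ids []string
		for _, line := range strings.Split(string(out), "\n") {
			if s := strings.TrimSpace(line); s != "" {
				ids = append(ids, s)
			}
		}
		return ids, nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy"} {
			ids, err := containerIDs(c)
			if err != nil {
				fmt.Println("error:", err)
				continue
			}
			fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
		}
	}

	Each returned ID can then be passed to "crictl logs --tail 400 <id>", which is exactly what the subsequent "Gathering logs for ..." lines do.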
	I0528 21:36:42.373122   53940 logs.go:123] Gathering logs for container status ...
	I0528 21:36:42.373132   53940 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:36:42.423503   53940 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:36:42.423518   53940 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:36:42.500868   53940 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:36:42.500881   53940 logs.go:123] Gathering logs for kube-apiserver [8be208bebdfe03545df7824da44f7afc62d5b7ab3cf1b0017e6ddeb8da35bd1d] ...
	I0528 21:36:42.500893   53940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8be208bebdfe03545df7824da44f7afc62d5b7ab3cf1b0017e6ddeb8da35bd1d"
	I0528 21:36:42.551132   53940 logs.go:123] Gathering logs for etcd [f5c23d93df45849840c288924a91213e4118a16dca362e5de8c27a3f5dc2ea90] ...
	I0528 21:36:42.551147   53940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5c23d93df45849840c288924a91213e4118a16dca362e5de8c27a3f5dc2ea90"
	I0528 21:36:42.606872   53940 logs.go:123] Gathering logs for kube-scheduler [0bf5138f27288ab0cbd4066c2b1a2e22117358f14e41a55a44f093965eeedd2b] ...
	I0528 21:36:42.606893   53940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0bf5138f27288ab0cbd4066c2b1a2e22117358f14e41a55a44f093965eeedd2b"
	I0528 21:36:42.686623   53940 logs.go:123] Gathering logs for kube-proxy [6855d61e8d8513cb64be545367a1175ca21d3ca5e1d652fb3326af59a7161916] ...
	I0528 21:36:42.686640   53940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6855d61e8d8513cb64be545367a1175ca21d3ca5e1d652fb3326af59a7161916"
	I0528 21:36:42.723027   53940 logs.go:123] Gathering logs for kube-controller-manager [cd48d9a682f35098694aa404e88bada604d846500b0d8522b2ffdaac02372052] ...
	I0528 21:36:42.723042   53940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd48d9a682f35098694aa404e88bada604d846500b0d8522b2ffdaac02372052"
	I0528 21:36:42.760412   53940 logs.go:123] Gathering logs for storage-provisioner [92d9169ab6ccab7ee6b0266d51f9372a739795eecb87253521e7fb9aa8ef7788] ...
	I0528 21:36:42.760427   53940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92d9169ab6ccab7ee6b0266d51f9372a739795eecb87253521e7fb9aa8ef7788"
	I0528 21:36:42.805172   53940 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:36:42.805187   53940 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:36:43.105585   53940 logs.go:123] Gathering logs for kubelet ...
	I0528 21:36:43.105610   53940 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:36:43.216942   53940 logs.go:123] Gathering logs for dmesg ...
	I0528 21:36:43.216964   53940 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:36:43.234647   53940 logs.go:123] Gathering logs for coredns [8d8a8c5eaac45dad52d6abbf32bc6bbe51b4145ba424b99c4329b21652154fd8] ...
	I0528 21:36:43.234671   53940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d8a8c5eaac45dad52d6abbf32bc6bbe51b4145ba424b99c4329b21652154fd8"
	I0528 21:36:43.273845   53940 logs.go:123] Gathering logs for kube-scheduler [07d7437db436b52ffc1890444d4c9171c35d237362b6896b4ec3807afd80136c] ...
	I0528 21:36:43.273865   53940 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07d7437db436b52ffc1890444d4c9171c35d237362b6896b4ec3807afd80136c"
	I0528 21:36:40.872861   59176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0528 21:36:40.926066   59176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kubernetes-upgrade-314578/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0528 21:36:40.979750   59176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kubernetes-upgrade-314578/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0528 21:36:41.013213   59176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kubernetes-upgrade-314578/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0528 21:36:41.141125   59176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kubernetes-upgrade-314578/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0528 21:36:41.208067   59176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0528 21:36:41.261623   59176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem --> /usr/share/ca-certificates/11760.pem (1338 bytes)
	I0528 21:36:41.292713   59176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem --> /usr/share/ca-certificates/117602.pem (1708 bytes)
	I0528 21:36:41.321712   59176 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0528 21:36:41.342422   59176 ssh_runner.go:195] Run: openssl version
	I0528 21:36:41.348344   59176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0528 21:36:41.360066   59176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0528 21:36:41.368817   59176 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 28 20:22 /usr/share/ca-certificates/minikubeCA.pem
	I0528 21:36:41.368877   59176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0528 21:36:41.374550   59176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0528 21:36:41.384663   59176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11760.pem && ln -fs /usr/share/ca-certificates/11760.pem /etc/ssl/certs/11760.pem"
	I0528 21:36:41.395754   59176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11760.pem
	I0528 21:36:41.400371   59176 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 28 20:34 /usr/share/ca-certificates/11760.pem
	I0528 21:36:41.400423   59176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11760.pem
	I0528 21:36:41.406471   59176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11760.pem /etc/ssl/certs/51391683.0"
	I0528 21:36:41.417229   59176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/117602.pem && ln -fs /usr/share/ca-certificates/117602.pem /etc/ssl/certs/117602.pem"
	I0528 21:36:41.428758   59176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117602.pem
	I0528 21:36:41.433467   59176 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 28 20:34 /usr/share/ca-certificates/117602.pem
	I0528 21:36:41.433521   59176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117602.pem
	I0528 21:36:41.439148   59176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/117602.pem /etc/ssl/certs/3ec20f2e.0"
	I0528 21:36:41.448993   59176 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0528 21:36:41.454434   59176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0528 21:36:41.460573   59176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0528 21:36:41.466722   59176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0528 21:36:41.472518   59176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0528 21:36:41.478307   59176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0528 21:36:41.484341   59176 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
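	The six openssl invocations above verify that each control-plane certificate remains valid for another 24 hours: "openssl x509 -noout -in <cert> -checkend 86400" exits non-zero if the certificate expires within that window. A minimal sketch of the same check follows, assuming openssl is on PATH; it is not minikube's certs.go, and the paths are simply the ones probed in the log.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// validFor24h reports whether the certificate at path is still valid
	// 86400 seconds (24h) from now; openssl exits non-zero when it is not.
	func validFor24h(path string) bool {
		return exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400").Run() == nil
	}

	func main() {
		// Paths mirror the ones probed in the log; adjust for your own host.
		for _, p := range []string{
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
			"/var/lib/minikube/certs/front-proxy-client.crt",
		} {
			fmt.Printf("%s valid for another 24h: %v\n", p, validFor24h(p))
		}
	}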
	I0528 21:36:41.490513   59176 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-314578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.30.1 ClusterName:kubernetes-upgrade-314578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:36:41.490584   59176 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0528 21:36:41.490632   59176 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0528 21:36:41.527053   59176 cri.go:89] found id: "f5ed7d85c1ade0a358c7599897e2f52c5818326355f4f2de8d46cd1e24c8253a"
	I0528 21:36:41.527080   59176 cri.go:89] found id: "1320acdb9944cabf63cda25d501b84df4095aad48c044deeeb79b2df50aff0d8"
	I0528 21:36:41.527086   59176 cri.go:89] found id: "5a3538b3d65ec3c9c216e5144e06d6d082cc3e38362ef503b168f9f95979f154"
	I0528 21:36:41.527091   59176 cri.go:89] found id: "07c121fcb002f520b704f15b1c51600d2eb3913f45c64b25cd81d99cd3c92ed2"
	I0528 21:36:41.527095   59176 cri.go:89] found id: ""
	I0528 21:36:41.527141   59176 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	May 28 21:36:51 kubernetes-upgrade-314578 crio[1884]: time="2024-05-28 21:36:51.889971866Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716932211888748554,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=97f48b3c-da7c-43b8-b3dc-b78871ab481a name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:36:51 kubernetes-upgrade-314578 crio[1884]: time="2024-05-28 21:36:51.892706864Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ec734e2f-9eef-4621-a48b-97524cade48a name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:36:51 kubernetes-upgrade-314578 crio[1884]: time="2024-05-28 21:36:51.892778674Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ec734e2f-9eef-4621-a48b-97524cade48a name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:36:51 kubernetes-upgrade-314578 crio[1884]: time="2024-05-28 21:36:51.892960122Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c21c16bb914c58f5f45acfa2d9dd7b266606cf587154f24966f69d930a4004f6,PodSandboxId:6811f7fdf579cf1d0246d4dac1eb79844acb8327b2525b04045ff26a35475d8c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716932204518961450,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-314578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b33b6ff2928d85deb83f5578369db730,},Annotations:map[string]string{io.kubernetes.container.hash: ddd23525,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:700794c8802284ac6f95852eaf8940f64cafc251c63be4e74af4d50fdbc31a05,PodSandboxId:ce3635565d50528ddb9ec85550379d05cf4196c05517f51b61d71968affb0d5a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716932204494106139,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-314578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c4608e35ff16f4589f20f8045d4ffe1,},Annotations:map[string]string{io.kubernetes.container.hash: ca11740c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:752251e42432d90fd69de508b4f2ffe47779e9fe33a845ac1fab01f5c35ed0e0,PodSandboxId:a2eee2527ec5e912eccba4c94504450363005061ad81495465975040ac2ae1ac,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716932204537328838,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-314578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f7296e3ea4ccd5aa2b8f0d73a63560,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13f581f2860c03c68888393de14ea4dacbc537602629e819db8ff3d5adde3bd3,PodSandboxId:2ed1c0b8c55219423b420c9ce7f741c9bb1973243edeb31d1bd05bf8095244fc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716932204479579569,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-314578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93aad40d42519365b679e34933aa2378,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a3538b3d65ec3c9c216e5144e06d6d082cc3e38362ef503b168f9f95979f154,PodSandboxId:17c013b7dddb55efa50be0049be184bb4dde309accbd9a2e7b8ec0b43ed3441a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716932198135891977,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-314578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b33b6ff2928d85deb83f5578369db730,},Annotations:map[string]string{io.kubernetes.container.hash: ddd23525,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5ed7d85c1ade0a358c7599897e2f52c5818326355f4f2de8d46cd1e24c8253a,PodSandboxId:5d8dd0c17778dbbf495227f2e554c5b05348a191f8f41bbdb30977e545f11995,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716932198188887721,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-314578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f7296e3ea4ccd5aa2b8f0d73a63560,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1320acdb9944cabf63cda25d501b84df4095aad48c044deeeb79b2df50aff0d8,PodSandboxId:7b44ec805842bc7395a6591658f152ef1825b512ab8a28285de9ccdd565c3124,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716932198142274635,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-314578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93aad40d42519365b679e34933aa2378,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07c121fcb002f520b704f15b1c51600d2eb3913f45c64b25cd81d99cd3c92ed2,PodSandboxId:59d7cd221d77357607bfb660a45bfff8266993f69dbb1236bd578f13dddf5215,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716932198065829917,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-314578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c4608e35ff16f4589f20f8045d4ffe1,},Annotations:map[string]string{io.kubernetes.container.hash: ca11740c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ec734e2f-9eef-4621-a48b-97524cade48a name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:36:51 kubernetes-upgrade-314578 crio[1884]: time="2024-05-28 21:36:51.930982111Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fcda7ae6-25f1-4578-83a8-0202ea4b6160 name=/runtime.v1.RuntimeService/Version
	May 28 21:36:51 kubernetes-upgrade-314578 crio[1884]: time="2024-05-28 21:36:51.931091065Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fcda7ae6-25f1-4578-83a8-0202ea4b6160 name=/runtime.v1.RuntimeService/Version
	May 28 21:36:51 kubernetes-upgrade-314578 crio[1884]: time="2024-05-28 21:36:51.932632932Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=52b574d1-c304-408a-b0a4-67ef8498f823 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:36:51 kubernetes-upgrade-314578 crio[1884]: time="2024-05-28 21:36:51.934086610Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716932211934049567,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=52b574d1-c304-408a-b0a4-67ef8498f823 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:36:51 kubernetes-upgrade-314578 crio[1884]: time="2024-05-28 21:36:51.935377863Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5dcc3b23-200a-4029-a8ff-42ad53e71909 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:36:51 kubernetes-upgrade-314578 crio[1884]: time="2024-05-28 21:36:51.935550771Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5dcc3b23-200a-4029-a8ff-42ad53e71909 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:36:51 kubernetes-upgrade-314578 crio[1884]: time="2024-05-28 21:36:51.935836359Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c21c16bb914c58f5f45acfa2d9dd7b266606cf587154f24966f69d930a4004f6,PodSandboxId:6811f7fdf579cf1d0246d4dac1eb79844acb8327b2525b04045ff26a35475d8c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716932204518961450,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-314578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b33b6ff2928d85deb83f5578369db730,},Annotations:map[string]string{io.kubernetes.container.hash: ddd23525,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:700794c8802284ac6f95852eaf8940f64cafc251c63be4e74af4d50fdbc31a05,PodSandboxId:ce3635565d50528ddb9ec85550379d05cf4196c05517f51b61d71968affb0d5a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716932204494106139,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-314578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c4608e35ff16f4589f20f8045d4ffe1,},Annotations:map[string]string{io.kubernetes.container.hash: ca11740c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:752251e42432d90fd69de508b4f2ffe47779e9fe33a845ac1fab01f5c35ed0e0,PodSandboxId:a2eee2527ec5e912eccba4c94504450363005061ad81495465975040ac2ae1ac,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716932204537328838,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-314578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f7296e3ea4ccd5aa2b8f0d73a63560,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13f581f2860c03c68888393de14ea4dacbc537602629e819db8ff3d5adde3bd3,PodSandboxId:2ed1c0b8c55219423b420c9ce7f741c9bb1973243edeb31d1bd05bf8095244fc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716932204479579569,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-314578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93aad40d42519365b679e34933aa2378,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a3538b3d65ec3c9c216e5144e06d6d082cc3e38362ef503b168f9f95979f154,PodSandboxId:17c013b7dddb55efa50be0049be184bb4dde309accbd9a2e7b8ec0b43ed3441a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716932198135891977,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-314578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b33b6ff2928d85deb83f5578369db730,},Annotations:map[string]string{io.kubernetes.container.hash: ddd23525,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5ed7d85c1ade0a358c7599897e2f52c5818326355f4f2de8d46cd1e24c8253a,PodSandboxId:5d8dd0c17778dbbf495227f2e554c5b05348a191f8f41bbdb30977e545f11995,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716932198188887721,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-314578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f7296e3ea4ccd5aa2b8f0d73a63560,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1320acdb9944cabf63cda25d501b84df4095aad48c044deeeb79b2df50aff0d8,PodSandboxId:7b44ec805842bc7395a6591658f152ef1825b512ab8a28285de9ccdd565c3124,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716932198142274635,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-314578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93aad40d42519365b679e34933aa2378,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07c121fcb002f520b704f15b1c51600d2eb3913f45c64b25cd81d99cd3c92ed2,PodSandboxId:59d7cd221d77357607bfb660a45bfff8266993f69dbb1236bd578f13dddf5215,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716932198065829917,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-314578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c4608e35ff16f4589f20f8045d4ffe1,},Annotations:map[string]string{io.kubernetes.container.hash: ca11740c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5dcc3b23-200a-4029-a8ff-42ad53e71909 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:36:51 kubernetes-upgrade-314578 crio[1884]: time="2024-05-28 21:36:51.985807372Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=75460c7b-ac37-4f5d-9cce-87c8d3bc93de name=/runtime.v1.RuntimeService/Version
	May 28 21:36:51 kubernetes-upgrade-314578 crio[1884]: time="2024-05-28 21:36:51.985915687Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=75460c7b-ac37-4f5d-9cce-87c8d3bc93de name=/runtime.v1.RuntimeService/Version
	May 28 21:36:51 kubernetes-upgrade-314578 crio[1884]: time="2024-05-28 21:36:51.987124791Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a8e686c8-3e98-4eef-a1a8-04ce89b7d8ba name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:36:51 kubernetes-upgrade-314578 crio[1884]: time="2024-05-28 21:36:51.987663342Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716932211987637597,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a8e686c8-3e98-4eef-a1a8-04ce89b7d8ba name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:36:51 kubernetes-upgrade-314578 crio[1884]: time="2024-05-28 21:36:51.988154983Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=527f36e7-8dc0-4c36-955f-0fd8c74b8c0c name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:36:51 kubernetes-upgrade-314578 crio[1884]: time="2024-05-28 21:36:51.988227856Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=527f36e7-8dc0-4c36-955f-0fd8c74b8c0c name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:36:51 kubernetes-upgrade-314578 crio[1884]: time="2024-05-28 21:36:51.988483474Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c21c16bb914c58f5f45acfa2d9dd7b266606cf587154f24966f69d930a4004f6,PodSandboxId:6811f7fdf579cf1d0246d4dac1eb79844acb8327b2525b04045ff26a35475d8c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716932204518961450,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-314578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b33b6ff2928d85deb83f5578369db730,},Annotations:map[string]string{io.kubernetes.container.hash: ddd23525,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:700794c8802284ac6f95852eaf8940f64cafc251c63be4e74af4d50fdbc31a05,PodSandboxId:ce3635565d50528ddb9ec85550379d05cf4196c05517f51b61d71968affb0d5a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716932204494106139,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-314578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c4608e35ff16f4589f20f8045d4ffe1,},Annotations:map[string]string{io.kubernetes.container.hash: ca11740c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:752251e42432d90fd69de508b4f2ffe47779e9fe33a845ac1fab01f5c35ed0e0,PodSandboxId:a2eee2527ec5e912eccba4c94504450363005061ad81495465975040ac2ae1ac,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716932204537328838,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-314578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f7296e3ea4ccd5aa2b8f0d73a63560,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13f581f2860c03c68888393de14ea4dacbc537602629e819db8ff3d5adde3bd3,PodSandboxId:2ed1c0b8c55219423b420c9ce7f741c9bb1973243edeb31d1bd05bf8095244fc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716932204479579569,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-314578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93aad40d42519365b679e34933aa2378,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a3538b3d65ec3c9c216e5144e06d6d082cc3e38362ef503b168f9f95979f154,PodSandboxId:17c013b7dddb55efa50be0049be184bb4dde309accbd9a2e7b8ec0b43ed3441a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716932198135891977,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-314578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b33b6ff2928d85deb83f5578369db730,},Annotations:map[string]string{io.kubernetes.container.hash: ddd23525,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5ed7d85c1ade0a358c7599897e2f52c5818326355f4f2de8d46cd1e24c8253a,PodSandboxId:5d8dd0c17778dbbf495227f2e554c5b05348a191f8f41bbdb30977e545f11995,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716932198188887721,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-314578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f7296e3ea4ccd5aa2b8f0d73a63560,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1320acdb9944cabf63cda25d501b84df4095aad48c044deeeb79b2df50aff0d8,PodSandboxId:7b44ec805842bc7395a6591658f152ef1825b512ab8a28285de9ccdd565c3124,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716932198142274635,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-314578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93aad40d42519365b679e34933aa2378,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07c121fcb002f520b704f15b1c51600d2eb3913f45c64b25cd81d99cd3c92ed2,PodSandboxId:59d7cd221d77357607bfb660a45bfff8266993f69dbb1236bd578f13dddf5215,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716932198065829917,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-314578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c4608e35ff16f4589f20f8045d4ffe1,},Annotations:map[string]string{io.kubernetes.container.hash: ca11740c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=527f36e7-8dc0-4c36-955f-0fd8c74b8c0c name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:36:52 kubernetes-upgrade-314578 crio[1884]: time="2024-05-28 21:36:52.024071975Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=512ef0ff-12e3-45da-8e9c-ffcdbb341f27 name=/runtime.v1.RuntimeService/Version
	May 28 21:36:52 kubernetes-upgrade-314578 crio[1884]: time="2024-05-28 21:36:52.024163216Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=512ef0ff-12e3-45da-8e9c-ffcdbb341f27 name=/runtime.v1.RuntimeService/Version
	May 28 21:36:52 kubernetes-upgrade-314578 crio[1884]: time="2024-05-28 21:36:52.025912102Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=325ed62a-7924-4ca6-a131-481ae226f9cf name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:36:52 kubernetes-upgrade-314578 crio[1884]: time="2024-05-28 21:36:52.026646052Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716932212026620484,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=325ed62a-7924-4ca6-a131-481ae226f9cf name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:36:52 kubernetes-upgrade-314578 crio[1884]: time="2024-05-28 21:36:52.027299513Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b0fe3180-c93f-4952-a3d3-08bba9371a12 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:36:52 kubernetes-upgrade-314578 crio[1884]: time="2024-05-28 21:36:52.027372525Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b0fe3180-c93f-4952-a3d3-08bba9371a12 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:36:52 kubernetes-upgrade-314578 crio[1884]: time="2024-05-28 21:36:52.027758770Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c21c16bb914c58f5f45acfa2d9dd7b266606cf587154f24966f69d930a4004f6,PodSandboxId:6811f7fdf579cf1d0246d4dac1eb79844acb8327b2525b04045ff26a35475d8c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716932204518961450,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-314578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b33b6ff2928d85deb83f5578369db730,},Annotations:map[string]string{io.kubernetes.container.hash: ddd23525,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:700794c8802284ac6f95852eaf8940f64cafc251c63be4e74af4d50fdbc31a05,PodSandboxId:ce3635565d50528ddb9ec85550379d05cf4196c05517f51b61d71968affb0d5a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716932204494106139,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-314578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c4608e35ff16f4589f20f8045d4ffe1,},Annotations:map[string]string{io.kubernetes.container.hash: ca11740c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:752251e42432d90fd69de508b4f2ffe47779e9fe33a845ac1fab01f5c35ed0e0,PodSandboxId:a2eee2527ec5e912eccba4c94504450363005061ad81495465975040ac2ae1ac,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716932204537328838,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-314578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f7296e3ea4ccd5aa2b8f0d73a63560,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13f581f2860c03c68888393de14ea4dacbc537602629e819db8ff3d5adde3bd3,PodSandboxId:2ed1c0b8c55219423b420c9ce7f741c9bb1973243edeb31d1bd05bf8095244fc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716932204479579569,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-314578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93aad40d42519365b679e34933aa2378,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a3538b3d65ec3c9c216e5144e06d6d082cc3e38362ef503b168f9f95979f154,PodSandboxId:17c013b7dddb55efa50be0049be184bb4dde309accbd9a2e7b8ec0b43ed3441a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716932198135891977,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-314578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b33b6ff2928d85deb83f5578369db730,},Annotations:map[string]string{io.kubernetes.container.hash: ddd23525,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5ed7d85c1ade0a358c7599897e2f52c5818326355f4f2de8d46cd1e24c8253a,PodSandboxId:5d8dd0c17778dbbf495227f2e554c5b05348a191f8f41bbdb30977e545f11995,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716932198188887721,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-314578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8f7296e3ea4ccd5aa2b8f0d73a63560,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1320acdb9944cabf63cda25d501b84df4095aad48c044deeeb79b2df50aff0d8,PodSandboxId:7b44ec805842bc7395a6591658f152ef1825b512ab8a28285de9ccdd565c3124,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716932198142274635,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-314578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93aad40d42519365b679e34933aa2378,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07c121fcb002f520b704f15b1c51600d2eb3913f45c64b25cd81d99cd3c92ed2,PodSandboxId:59d7cd221d77357607bfb660a45bfff8266993f69dbb1236bd578f13dddf5215,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716932198065829917,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-314578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c4608e35ff16f4589f20f8045d4ffe1,},Annotations:map[string]string{io.kubernetes.container.hash: ca11740c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b0fe3180-c93f-4952-a3d3-08bba9371a12 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	752251e42432d       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   7 seconds ago       Running             kube-scheduler            2                   a2eee2527ec5e       kube-scheduler-kubernetes-upgrade-314578
	c21c16bb914c5       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   7 seconds ago       Running             etcd                      2                   6811f7fdf579c       etcd-kubernetes-upgrade-314578
	700794c880228       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   7 seconds ago       Running             kube-apiserver            2                   ce3635565d505       kube-apiserver-kubernetes-upgrade-314578
	13f581f2860c0       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   7 seconds ago       Running             kube-controller-manager   2                   2ed1c0b8c5521       kube-controller-manager-kubernetes-upgrade-314578
	f5ed7d85c1ade       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   13 seconds ago      Exited              kube-scheduler            1                   5d8dd0c17778d       kube-scheduler-kubernetes-upgrade-314578
	1320acdb9944c       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   13 seconds ago      Exited              kube-controller-manager   1                   7b44ec805842b       kube-controller-manager-kubernetes-upgrade-314578
	5a3538b3d65ec       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   13 seconds ago      Exited              etcd                      1                   17c013b7dddb5       etcd-kubernetes-upgrade-314578
	07c121fcb002f       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   14 seconds ago      Exited              kube-apiserver            1                   59d7cd221d773       kube-apiserver-kubernetes-upgrade-314578
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-314578
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-314578
	                    kubernetes.io/os=linux
	Annotations:        volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 21:36:29 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-314578
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 21:36:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 May 2024 21:36:48 +0000   Tue, 28 May 2024 21:36:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 May 2024 21:36:48 +0000   Tue, 28 May 2024 21:36:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 May 2024 21:36:48 +0000   Tue, 28 May 2024 21:36:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 May 2024 21:36:48 +0000   Tue, 28 May 2024 21:36:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.174
	  Hostname:    kubernetes-upgrade-314578
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 eebace2fd6db4c0cb89bb14bfda5d7f9
	  System UUID:                eebace2f-d6db-4c0c-b89b-b14bfda5d7f9
	  Boot ID:                    aef034e2-6c66-4ae2-b29b-59f0c55b7453
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 kube-apiserver-kubernetes-upgrade-314578             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-314578    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19s
	  kube-system                 kube-scheduler-kubernetes-upgrade-314578             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                550m (27%)  0 (0%)
	  memory             0 (0%)      0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  NodeHasSufficientMemory  27s (x8 over 27s)  kubelet  Node kubernetes-upgrade-314578 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27s (x8 over 27s)  kubelet  Node kubernetes-upgrade-314578 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27s (x7 over 27s)  kubelet  Node kubernetes-upgrade-314578 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27s                kubelet  Updated Node Allocatable limit across pods
	  Normal  Starting                 9s                 kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 8s)    kubelet  Node kubernetes-upgrade-314578 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 8s)    kubelet  Node kubernetes-upgrade-314578 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 8s)    kubelet  Node kubernetes-upgrade-314578 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s                 kubelet  Updated Node Allocatable limit across pods
	
	
	==> dmesg <==
	[  +2.826324] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.748142] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.495819] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.058437] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.069371] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.225959] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.154161] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.328242] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +4.662527] systemd-fstab-generator[738]: Ignoring "noauto" option for root device
	[  +0.104176] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.672446] systemd-fstab-generator[862]: Ignoring "noauto" option for root device
	[  +9.685380] systemd-fstab-generator[1254]: Ignoring "noauto" option for root device
	[  +0.091226] kauditd_printk_skb: 97 callbacks suppressed
	[  +4.089745] systemd-fstab-generator[1805]: Ignoring "noauto" option for root device
	[  +0.179112] systemd-fstab-generator[1817]: Ignoring "noauto" option for root device
	[  +0.203184] systemd-fstab-generator[1832]: Ignoring "noauto" option for root device
	[  +0.231189] systemd-fstab-generator[1843]: Ignoring "noauto" option for root device
	[  +0.318224] systemd-fstab-generator[1871]: Ignoring "noauto" option for root device
	[  +0.091908] kauditd_printk_skb: 149 callbacks suppressed
	[  +0.787453] systemd-fstab-generator[2063]: Ignoring "noauto" option for root device
	[  +3.054165] systemd-fstab-generator[2334]: Ignoring "noauto" option for root device
	[  +6.458693] systemd-fstab-generator[2602]: Ignoring "noauto" option for root device
	[  +0.085595] kauditd_printk_skb: 104 callbacks suppressed
	
	
	==> etcd [5a3538b3d65ec3c9c216e5144e06d6d082cc3e38362ef503b168f9f95979f154] <==
	{"level":"info","ts":"2024-05-28T21:36:38.639205Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"11.303449ms"}
	{"level":"info","ts":"2024-05-28T21:36:38.641086Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-05-28T21:36:38.645708Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"3f65b9220f75d9a5","local-member-id":"72f328261b8d7407","commit-index":300}
	{"level":"info","ts":"2024-05-28T21:36:38.647171Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 switched to configuration voters=()"}
	{"level":"info","ts":"2024-05-28T21:36:38.647238Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 became follower at term 2"}
	{"level":"info","ts":"2024-05-28T21:36:38.647281Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 72f328261b8d7407 [peers: [], term: 2, commit: 300, applied: 0, lastindex: 300, lastterm: 2]"}
	{"level":"warn","ts":"2024-05-28T21:36:38.649586Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-05-28T21:36:38.656781Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":294}
	{"level":"info","ts":"2024-05-28T21:36:38.665195Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-05-28T21:36:38.674783Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"72f328261b8d7407","timeout":"7s"}
	{"level":"info","ts":"2024-05-28T21:36:38.675644Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"72f328261b8d7407"}
	{"level":"info","ts":"2024-05-28T21:36:38.680668Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"72f328261b8d7407","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-05-28T21:36:38.681901Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-05-28T21:36:38.706359Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-28T21:36:38.706507Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-28T21:36:38.706529Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-28T21:36:38.706821Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 switched to configuration voters=(8283008283800597511)"}
	{"level":"info","ts":"2024-05-28T21:36:38.706935Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3f65b9220f75d9a5","local-member-id":"72f328261b8d7407","added-peer-id":"72f328261b8d7407","added-peer-peer-urls":["https://192.168.39.174:2380"]}
	{"level":"info","ts":"2024-05-28T21:36:38.70706Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3f65b9220f75d9a5","local-member-id":"72f328261b8d7407","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-28T21:36:38.707111Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-28T21:36:38.711841Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-28T21:36:38.712019Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"72f328261b8d7407","initial-advertise-peer-urls":["https://192.168.39.174:2380"],"listen-peer-urls":["https://192.168.39.174:2380"],"advertise-client-urls":["https://192.168.39.174:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.174:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-28T21:36:38.712046Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-28T21:36:38.712107Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.174:2380"}
	{"level":"info","ts":"2024-05-28T21:36:38.712113Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.174:2380"}
	
	
	==> etcd [c21c16bb914c58f5f45acfa2d9dd7b266606cf587154f24966f69d930a4004f6] <==
	{"level":"info","ts":"2024-05-28T21:36:44.915821Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-28T21:36:44.91583Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-28T21:36:44.916005Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 switched to configuration voters=(8283008283800597511)"}
	{"level":"info","ts":"2024-05-28T21:36:44.916088Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3f65b9220f75d9a5","local-member-id":"72f328261b8d7407","added-peer-id":"72f328261b8d7407","added-peer-peer-urls":["https://192.168.39.174:2380"]}
	{"level":"info","ts":"2024-05-28T21:36:44.916202Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3f65b9220f75d9a5","local-member-id":"72f328261b8d7407","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-28T21:36:44.91624Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-28T21:36:44.94492Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-28T21:36:44.945101Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.174:2380"}
	{"level":"info","ts":"2024-05-28T21:36:44.945172Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.174:2380"}
	{"level":"info","ts":"2024-05-28T21:36:44.945169Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-28T21:36:44.945115Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"72f328261b8d7407","initial-advertise-peer-urls":["https://192.168.39.174:2380"],"listen-peer-urls":["https://192.168.39.174:2380"],"advertise-client-urls":["https://192.168.39.174:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.174:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-28T21:36:46.577861Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-28T21:36:46.577906Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-28T21:36:46.577997Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 received MsgPreVoteResp from 72f328261b8d7407 at term 2"}
	{"level":"info","ts":"2024-05-28T21:36:46.578019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 became candidate at term 3"}
	{"level":"info","ts":"2024-05-28T21:36:46.578028Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 received MsgVoteResp from 72f328261b8d7407 at term 3"}
	{"level":"info","ts":"2024-05-28T21:36:46.578042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 became leader at term 3"}
	{"level":"info","ts":"2024-05-28T21:36:46.578076Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 72f328261b8d7407 elected leader 72f328261b8d7407 at term 3"}
	{"level":"info","ts":"2024-05-28T21:36:46.58414Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-28T21:36:46.584488Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-28T21:36:46.584929Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-28T21:36:46.584992Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-28T21:36:46.584156Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"72f328261b8d7407","local-member-attributes":"{Name:kubernetes-upgrade-314578 ClientURLs:[https://192.168.39.174:2379]}","request-path":"/0/members/72f328261b8d7407/attributes","cluster-id":"3f65b9220f75d9a5","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-28T21:36:46.586956Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.174:2379"}
	{"level":"info","ts":"2024-05-28T21:36:46.587358Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 21:36:52 up 0 min,  0 users,  load average: 1.77, 0.44, 0.15
	Linux kubernetes-upgrade-314578 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [07c121fcb002f520b704f15b1c51600d2eb3913f45c64b25cd81d99cd3c92ed2] <==
	I0528 21:36:38.587210       1 options.go:221] external host was not specified, using 192.168.39.174
	I0528 21:36:38.588201       1 server.go:148] Version: v1.30.1
	I0528 21:36:38.588229       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 21:36:39.629219       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0528 21:36:39.630078       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0528 21:36:39.630537       1 instance.go:299] Using reconciler: lease
	I0528 21:36:39.630975       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0528 21:36:39.631277       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	W0528 21:36:39.914638       1 logging.go:59] [core] [Channel #3 SubChannel #5] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:34830->127.0.0.1:2379: read: connection reset by peer"
	W0528 21:36:39.915853       1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:34824->127.0.0.1:2379: read: connection reset by peer"
	
	
	==> kube-apiserver [700794c8802284ac6f95852eaf8940f64cafc251c63be4e74af4d50fdbc31a05] <==
	I0528 21:36:47.988751       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0528 21:36:47.988843       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0528 21:36:48.036496       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0528 21:36:48.046556       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0528 21:36:48.046592       1 policy_source.go:224] refreshing policies
	I0528 21:36:48.055715       1 shared_informer.go:320] Caches are synced for configmaps
	I0528 21:36:48.056120       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0528 21:36:48.056202       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0528 21:36:48.056804       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0528 21:36:48.056863       1 aggregator.go:165] initial CRD sync complete...
	I0528 21:36:48.056888       1 autoregister_controller.go:141] Starting autoregister controller
	I0528 21:36:48.056910       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0528 21:36:48.056931       1 cache.go:39] Caches are synced for autoregister controller
	I0528 21:36:48.061755       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0528 21:36:48.062148       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0528 21:36:48.062204       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0528 21:36:48.062484       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	E0528 21:36:48.068181       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0528 21:36:48.087071       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0528 21:36:48.968556       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0528 21:36:49.875573       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0528 21:36:49.892703       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0528 21:36:49.930793       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0528 21:36:50.038002       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0528 21:36:50.053814       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [1320acdb9944cabf63cda25d501b84df4095aad48c044deeeb79b2df50aff0d8] <==
	I0528 21:36:39.835269       1 serving.go:380] Generated self-signed cert in-memory
	
	
	==> kube-controller-manager [13f581f2860c03c68888393de14ea4dacbc537602629e819db8ff3d5adde3bd3] <==
	I0528 21:36:50.210768       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0528 21:36:50.210777       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0528 21:36:50.212612       1 controllermanager.go:761] "Started controller" controller="statefulset-controller"
	I0528 21:36:50.212797       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0528 21:36:50.212805       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	E0528 21:36:50.215107       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0528 21:36:50.215127       1 controllermanager.go:739] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0528 21:36:50.271301       1 controllermanager.go:761] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0528 21:36:50.271478       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0528 21:36:50.271824       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0528 21:36:50.274916       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0528 21:36:50.275455       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0528 21:36:50.275621       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0528 21:36:50.312100       1 controllermanager.go:761] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0528 21:36:50.312119       1 controllermanager.go:739] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0528 21:36:50.312231       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0528 21:36:50.312239       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0528 21:36:50.358260       1 controllermanager.go:761] "Started controller" controller="deployment-controller"
	I0528 21:36:50.358516       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0528 21:36:50.358677       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0528 21:36:50.409751       1 controllermanager.go:761] "Started controller" controller="pod-garbage-collector-controller"
	I0528 21:36:50.410044       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0528 21:36:50.410314       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0528 21:36:50.457767       1 controllermanager.go:761] "Started controller" controller="bootstrap-signer-controller"
	I0528 21:36:50.457822       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	
	
	==> kube-scheduler [752251e42432d90fd69de508b4f2ffe47779e9fe33a845ac1fab01f5c35ed0e0] <==
	I0528 21:36:45.806779       1 serving.go:380] Generated self-signed cert in-memory
	W0528 21:36:48.005558       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0528 21:36:48.005667       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
	W0528 21:36:48.005703       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0528 21:36:48.005766       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0528 21:36:48.032753       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0528 21:36:48.032891       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 21:36:48.034342       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0528 21:36:48.034584       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0528 21:36:48.034623       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0528 21:36:48.034686       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0528 21:36:48.135349       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [f5ed7d85c1ade0a358c7599897e2f52c5818326355f4f2de8d46cd1e24c8253a] <==
	
	
	==> kubelet <==
	May 28 21:36:44 kubernetes-upgrade-314578 kubelet[2341]: I0528 21:36:44.175522    2341 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8c4608e35ff16f4589f20f8045d4ffe1-ca-certs\") pod \"kube-apiserver-kubernetes-upgrade-314578\" (UID: \"8c4608e35ff16f4589f20f8045d4ffe1\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-314578"
	May 28 21:36:44 kubernetes-upgrade-314578 kubelet[2341]: I0528 21:36:44.175557    2341 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8c4608e35ff16f4589f20f8045d4ffe1-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-314578\" (UID: \"8c4608e35ff16f4589f20f8045d4ffe1\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-314578"
	May 28 21:36:44 kubernetes-upgrade-314578 kubelet[2341]: I0528 21:36:44.175597    2341 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/93aad40d42519365b679e34933aa2378-flexvolume-dir\") pod \"kube-controller-manager-kubernetes-upgrade-314578\" (UID: \"93aad40d42519365b679e34933aa2378\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-314578"
	May 28 21:36:44 kubernetes-upgrade-314578 kubelet[2341]: I0528 21:36:44.175630    2341 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/93aad40d42519365b679e34933aa2378-k8s-certs\") pod \"kube-controller-manager-kubernetes-upgrade-314578\" (UID: \"93aad40d42519365b679e34933aa2378\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-314578"
	May 28 21:36:44 kubernetes-upgrade-314578 kubelet[2341]: I0528 21:36:44.175655    2341 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d8f7296e3ea4ccd5aa2b8f0d73a63560-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-314578\" (UID: \"d8f7296e3ea4ccd5aa2b8f0d73a63560\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-314578"
	May 28 21:36:44 kubernetes-upgrade-314578 kubelet[2341]: I0528 21:36:44.175684    2341 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/b33b6ff2928d85deb83f5578369db730-etcd-certs\") pod \"etcd-kubernetes-upgrade-314578\" (UID: \"b33b6ff2928d85deb83f5578369db730\") " pod="kube-system/etcd-kubernetes-upgrade-314578"
	May 28 21:36:44 kubernetes-upgrade-314578 kubelet[2341]: E0528 21:36:44.178901    2341 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-314578?timeout=10s\": dial tcp 192.168.39.174:8443: connect: connection refused" interval="400ms"
	May 28 21:36:44 kubernetes-upgrade-314578 kubelet[2341]: I0528 21:36:44.277853    2341 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-314578"
	May 28 21:36:44 kubernetes-upgrade-314578 kubelet[2341]: E0528 21:36:44.278849    2341 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.174:8443: connect: connection refused" node="kubernetes-upgrade-314578"
	May 28 21:36:44 kubernetes-upgrade-314578 kubelet[2341]: I0528 21:36:44.461176    2341 scope.go:117] "RemoveContainer" containerID="f5ed7d85c1ade0a358c7599897e2f52c5818326355f4f2de8d46cd1e24c8253a"
	May 28 21:36:44 kubernetes-upgrade-314578 kubelet[2341]: I0528 21:36:44.461570    2341 scope.go:117] "RemoveContainer" containerID="1320acdb9944cabf63cda25d501b84df4095aad48c044deeeb79b2df50aff0d8"
	May 28 21:36:44 kubernetes-upgrade-314578 kubelet[2341]: I0528 21:36:44.462651    2341 scope.go:117] "RemoveContainer" containerID="5a3538b3d65ec3c9c216e5144e06d6d082cc3e38362ef503b168f9f95979f154"
	May 28 21:36:44 kubernetes-upgrade-314578 kubelet[2341]: I0528 21:36:44.463084    2341 scope.go:117] "RemoveContainer" containerID="07c121fcb002f520b704f15b1c51600d2eb3913f45c64b25cd81d99cd3c92ed2"
	May 28 21:36:44 kubernetes-upgrade-314578 kubelet[2341]: E0528 21:36:44.585788    2341 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-314578?timeout=10s\": dial tcp 192.168.39.174:8443: connect: connection refused" interval="800ms"
	May 28 21:36:44 kubernetes-upgrade-314578 kubelet[2341]: I0528 21:36:44.680563    2341 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-314578"
	May 28 21:36:44 kubernetes-upgrade-314578 kubelet[2341]: E0528 21:36:44.683364    2341 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.174:8443: connect: connection refused" node="kubernetes-upgrade-314578"
	May 28 21:36:44 kubernetes-upgrade-314578 kubelet[2341]: W0528 21:36:44.762802    2341 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-314578&limit=500&resourceVersion=0": dial tcp 192.168.39.174:8443: connect: connection refused
	May 28 21:36:44 kubernetes-upgrade-314578 kubelet[2341]: E0528 21:36:44.762883    2341 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-314578&limit=500&resourceVersion=0": dial tcp 192.168.39.174:8443: connect: connection refused
	May 28 21:36:44 kubernetes-upgrade-314578 kubelet[2341]: W0528 21:36:44.791452    2341 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.174:8443: connect: connection refused
	May 28 21:36:44 kubernetes-upgrade-314578 kubelet[2341]: E0528 21:36:44.791585    2341 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.174:8443: connect: connection refused
	May 28 21:36:45 kubernetes-upgrade-314578 kubelet[2341]: I0528 21:36:45.490999    2341 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-314578"
	May 28 21:36:48 kubernetes-upgrade-314578 kubelet[2341]: I0528 21:36:48.113323    2341 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-314578"
	May 28 21:36:48 kubernetes-upgrade-314578 kubelet[2341]: I0528 21:36:48.114075    2341 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-314578"
	May 28 21:36:48 kubernetes-upgrade-314578 kubelet[2341]: I0528 21:36:48.961075    2341 apiserver.go:52] "Watching apiserver"
	May 28 21:36:48 kubernetes-upgrade-314578 kubelet[2341]: I0528 21:36:48.974310    2341 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0528 21:36:51.562296   59401 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18966-3963/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
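The "bufio.Scanner: token too long" error above (logs.go:258) is the stock failure mode when a single line in lastStart.txt exceeds bufio.Scanner's default 64 KiB token limit, so the log collector gives up on reading the file. Below is a minimal sketch of reading such a file with an enlarged scanner buffer; the file path and the 10 MiB cap are illustrative assumptions, not taken from the minikube code.

	// Sketch: read a log file whose lines can exceed bufio.Scanner's
	// default 64 KiB token limit ("token too long"). Path and cap are
	// illustrative assumptions.
	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // assumed path for illustration
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Raise the maximum token size from the 64 KiB default to 10 MiB.
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			// Without the larger buffer this is where "token too long" surfaces.
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}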
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-314578 -n kubernetes-upgrade-314578
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-314578 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: kube-apiserver-kubernetes-upgrade-314578 storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-314578 describe pod kube-apiserver-kubernetes-upgrade-314578 storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-314578 describe pod kube-apiserver-kubernetes-upgrade-314578 storage-provisioner: exit status 1 (71.075468ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "kube-apiserver-kubernetes-upgrade-314578" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-314578 describe pod kube-apiserver-kubernetes-upgrade-314578 storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-314578" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-314578
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-314578: (1.192624245s)
--- FAIL: TestKubernetesUpgrade (387.73s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (63.32s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-547166 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0528 21:32:20.497951   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/functional-193928/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-547166 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (56.487561223s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-547166] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18966-3963/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-3963/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-547166" primary control-plane node in "pause-547166" cluster
	* Updating the running kvm2 "pause-547166" VM ...
	* Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-547166" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
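The check that fails at pause_test.go:100 is a substring assertion: the combined output of the second start (the stdout above together with the stderr below) must contain "The running cluster does not require reconfiguration", and it does not. A minimal sketch of that kind of assertion is shown here, assuming a plain exec-and-search harness rather than minikube's actual test helpers.

	// Sketch: run the second start and check its combined output for the
	// expected message, in the spirit of pause_test.go:100. The harness is
	// an assumption; only the command line and message come from the report.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "pause-547166",
			"--alsologtostderr", "-v=1", "--driver=kvm2", "--container-runtime=crio")
		out, err := cmd.CombinedOutput()
		if err != nil {
			fmt.Println("second start failed:", err)
		}
		const want = "The running cluster does not require reconfiguration"
		if !strings.Contains(string(out), want) {
			fmt.Printf("expected the second start log output to include %q\n", want)
		}
	}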
** stderr ** 
	I0528 21:32:00.747653   53591 out.go:291] Setting OutFile to fd 1 ...
	I0528 21:32:00.747814   53591 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:32:00.747827   53591 out.go:304] Setting ErrFile to fd 2...
	I0528 21:32:00.747835   53591 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:32:00.748180   53591 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
	I0528 21:32:00.748835   53591 out.go:298] Setting JSON to false
	I0528 21:32:00.750143   53591 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4464,"bootTime":1716927457,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0528 21:32:00.750213   53591 start.go:139] virtualization: kvm guest
	I0528 21:32:00.751989   53591 out.go:177] * [pause-547166] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0528 21:32:00.753538   53591 notify.go:220] Checking for updates...
	I0528 21:32:00.753543   53591 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 21:32:00.754999   53591 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 21:32:00.756171   53591 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 21:32:00.757322   53591 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 21:32:00.758597   53591 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0528 21:32:00.759760   53591 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 21:32:00.761455   53591 config.go:182] Loaded profile config "pause-547166": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 21:32:00.762065   53591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:32:00.762127   53591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:32:00.778117   53591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35421
	I0528 21:32:00.778587   53591 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:32:00.779333   53591 main.go:141] libmachine: Using API Version  1
	I0528 21:32:00.779358   53591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:32:00.779816   53591 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:32:00.780016   53591 main.go:141] libmachine: (pause-547166) Calling .DriverName
	I0528 21:32:00.780297   53591 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 21:32:00.780592   53591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:32:00.780632   53591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:32:00.794454   53591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34041
	I0528 21:32:00.794850   53591 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:32:00.795337   53591 main.go:141] libmachine: Using API Version  1
	I0528 21:32:00.795364   53591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:32:00.795708   53591 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:32:00.795904   53591 main.go:141] libmachine: (pause-547166) Calling .DriverName
	I0528 21:32:00.831731   53591 out.go:177] * Using the kvm2 driver based on existing profile
	I0528 21:32:00.833314   53591 start.go:297] selected driver: kvm2
	I0528 21:32:00.833328   53591 start.go:901] validating driver "kvm2" against &{Name:pause-547166 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.30.1 ClusterName:pause-547166 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.108 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-dev
ice-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:32:00.833472   53591 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0528 21:32:00.833812   53591 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 21:32:00.833876   53591 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18966-3963/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0528 21:32:00.851301   53591 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0528 21:32:00.851931   53591 cni.go:84] Creating CNI manager for ""
	I0528 21:32:00.851944   53591 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 21:32:00.852002   53591 start.go:340] cluster config:
	{Name:pause-547166 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:pause-547166 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.108 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:
false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:32:00.852125   53591 iso.go:125] acquiring lock: {Name:mk0ef48bd19f4019c3ee87391d15b9afc1d5e8d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 21:32:00.853960   53591 out.go:177] * Starting "pause-547166" primary control-plane node in "pause-547166" cluster
	I0528 21:32:00.855349   53591 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 21:32:00.855381   53591 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0528 21:32:00.855388   53591 cache.go:56] Caching tarball of preloaded images
	I0528 21:32:00.855456   53591 preload.go:173] Found /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0528 21:32:00.855466   53591 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0528 21:32:00.855576   53591 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/pause-547166/config.json ...
	I0528 21:32:00.855741   53591 start.go:360] acquireMachinesLock for pause-547166: {Name:mk6fb3103b370c7f3d9923fd9de7afe2c77239d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0528 21:32:02.254693   53591 start.go:364] duration metric: took 1.398915816s to acquireMachinesLock for "pause-547166"
	I0528 21:32:02.254771   53591 start.go:96] Skipping create...Using existing machine configuration
	I0528 21:32:02.254789   53591 fix.go:54] fixHost starting: 
	I0528 21:32:02.255182   53591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:32:02.255254   53591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:32:02.274300   53591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36093
	I0528 21:32:02.274963   53591 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:32:02.275492   53591 main.go:141] libmachine: Using API Version  1
	I0528 21:32:02.275520   53591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:32:02.275857   53591 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:32:02.276088   53591 main.go:141] libmachine: (pause-547166) Calling .DriverName
	I0528 21:32:02.276307   53591 main.go:141] libmachine: (pause-547166) Calling .GetState
	I0528 21:32:02.278271   53591 fix.go:112] recreateIfNeeded on pause-547166: state=Running err=<nil>
	W0528 21:32:02.278292   53591 fix.go:138] unexpected machine state, will restart: <nil>
	I0528 21:32:02.280346   53591 out.go:177] * Updating the running kvm2 "pause-547166" VM ...
	I0528 21:32:02.282204   53591 machine.go:94] provisionDockerMachine start ...
	I0528 21:32:02.282222   53591 main.go:141] libmachine: (pause-547166) Calling .DriverName
	I0528 21:32:02.282416   53591 main.go:141] libmachine: (pause-547166) Calling .GetSSHHostname
	I0528 21:32:02.285131   53591 main.go:141] libmachine: (pause-547166) DBG | domain pause-547166 has defined MAC address 52:54:00:de:e9:57 in network mk-pause-547166
	I0528 21:32:02.285571   53591 main.go:141] libmachine: (pause-547166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e9:57", ip: ""} in network mk-pause-547166: {Iface:virbr2 ExpiryTime:2024-05-28 22:30:39 +0000 UTC Type:0 Mac:52:54:00:de:e9:57 Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:pause-547166 Clientid:01:52:54:00:de:e9:57}
	I0528 21:32:02.285597   53591 main.go:141] libmachine: (pause-547166) DBG | domain pause-547166 has defined IP address 192.168.50.108 and MAC address 52:54:00:de:e9:57 in network mk-pause-547166
	I0528 21:32:02.285803   53591 main.go:141] libmachine: (pause-547166) Calling .GetSSHPort
	I0528 21:32:02.286007   53591 main.go:141] libmachine: (pause-547166) Calling .GetSSHKeyPath
	I0528 21:32:02.286156   53591 main.go:141] libmachine: (pause-547166) Calling .GetSSHKeyPath
	I0528 21:32:02.286309   53591 main.go:141] libmachine: (pause-547166) Calling .GetSSHUsername
	I0528 21:32:02.286471   53591 main.go:141] libmachine: Using SSH client type: native
	I0528 21:32:02.286697   53591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.108 22 <nil> <nil>}
	I0528 21:32:02.286711   53591 main.go:141] libmachine: About to run SSH command:
	hostname
	I0528 21:32:02.406487   53591 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-547166
	
	I0528 21:32:02.406522   53591 main.go:141] libmachine: (pause-547166) Calling .GetMachineName
	I0528 21:32:02.406756   53591 buildroot.go:166] provisioning hostname "pause-547166"
	I0528 21:32:02.406779   53591 main.go:141] libmachine: (pause-547166) Calling .GetMachineName
	I0528 21:32:02.406978   53591 main.go:141] libmachine: (pause-547166) Calling .GetSSHHostname
	I0528 21:32:02.409701   53591 main.go:141] libmachine: (pause-547166) DBG | domain pause-547166 has defined MAC address 52:54:00:de:e9:57 in network mk-pause-547166
	I0528 21:32:02.410098   53591 main.go:141] libmachine: (pause-547166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e9:57", ip: ""} in network mk-pause-547166: {Iface:virbr2 ExpiryTime:2024-05-28 22:30:39 +0000 UTC Type:0 Mac:52:54:00:de:e9:57 Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:pause-547166 Clientid:01:52:54:00:de:e9:57}
	I0528 21:32:02.410137   53591 main.go:141] libmachine: (pause-547166) DBG | domain pause-547166 has defined IP address 192.168.50.108 and MAC address 52:54:00:de:e9:57 in network mk-pause-547166
	I0528 21:32:02.410270   53591 main.go:141] libmachine: (pause-547166) Calling .GetSSHPort
	I0528 21:32:02.410449   53591 main.go:141] libmachine: (pause-547166) Calling .GetSSHKeyPath
	I0528 21:32:02.410617   53591 main.go:141] libmachine: (pause-547166) Calling .GetSSHKeyPath
	I0528 21:32:02.410751   53591 main.go:141] libmachine: (pause-547166) Calling .GetSSHUsername
	I0528 21:32:02.410974   53591 main.go:141] libmachine: Using SSH client type: native
	I0528 21:32:02.411139   53591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.108 22 <nil> <nil>}
	I0528 21:32:02.411152   53591 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-547166 && echo "pause-547166" | sudo tee /etc/hostname
	I0528 21:32:02.543262   53591 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-547166
	
	I0528 21:32:02.543290   53591 main.go:141] libmachine: (pause-547166) Calling .GetSSHHostname
	I0528 21:32:02.546039   53591 main.go:141] libmachine: (pause-547166) DBG | domain pause-547166 has defined MAC address 52:54:00:de:e9:57 in network mk-pause-547166
	I0528 21:32:02.546364   53591 main.go:141] libmachine: (pause-547166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e9:57", ip: ""} in network mk-pause-547166: {Iface:virbr2 ExpiryTime:2024-05-28 22:30:39 +0000 UTC Type:0 Mac:52:54:00:de:e9:57 Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:pause-547166 Clientid:01:52:54:00:de:e9:57}
	I0528 21:32:02.546389   53591 main.go:141] libmachine: (pause-547166) DBG | domain pause-547166 has defined IP address 192.168.50.108 and MAC address 52:54:00:de:e9:57 in network mk-pause-547166
	I0528 21:32:02.546579   53591 main.go:141] libmachine: (pause-547166) Calling .GetSSHPort
	I0528 21:32:02.546786   53591 main.go:141] libmachine: (pause-547166) Calling .GetSSHKeyPath
	I0528 21:32:02.546962   53591 main.go:141] libmachine: (pause-547166) Calling .GetSSHKeyPath
	I0528 21:32:02.547070   53591 main.go:141] libmachine: (pause-547166) Calling .GetSSHUsername
	I0528 21:32:02.547246   53591 main.go:141] libmachine: Using SSH client type: native
	I0528 21:32:02.547431   53591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.108 22 <nil> <nil>}
	I0528 21:32:02.547454   53591 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-547166' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-547166/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-547166' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 21:32:02.662950   53591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 21:32:02.662980   53591 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18966-3963/.minikube CaCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18966-3963/.minikube}
	I0528 21:32:02.663008   53591 buildroot.go:174] setting up certificates
	I0528 21:32:02.663023   53591 provision.go:84] configureAuth start
	I0528 21:32:02.663036   53591 main.go:141] libmachine: (pause-547166) Calling .GetMachineName
	I0528 21:32:02.663346   53591 main.go:141] libmachine: (pause-547166) Calling .GetIP
	I0528 21:32:02.666056   53591 main.go:141] libmachine: (pause-547166) DBG | domain pause-547166 has defined MAC address 52:54:00:de:e9:57 in network mk-pause-547166
	I0528 21:32:02.666425   53591 main.go:141] libmachine: (pause-547166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e9:57", ip: ""} in network mk-pause-547166: {Iface:virbr2 ExpiryTime:2024-05-28 22:30:39 +0000 UTC Type:0 Mac:52:54:00:de:e9:57 Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:pause-547166 Clientid:01:52:54:00:de:e9:57}
	I0528 21:32:02.666451   53591 main.go:141] libmachine: (pause-547166) DBG | domain pause-547166 has defined IP address 192.168.50.108 and MAC address 52:54:00:de:e9:57 in network mk-pause-547166
	I0528 21:32:02.666625   53591 main.go:141] libmachine: (pause-547166) Calling .GetSSHHostname
	I0528 21:32:02.668966   53591 main.go:141] libmachine: (pause-547166) DBG | domain pause-547166 has defined MAC address 52:54:00:de:e9:57 in network mk-pause-547166
	I0528 21:32:02.669399   53591 main.go:141] libmachine: (pause-547166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e9:57", ip: ""} in network mk-pause-547166: {Iface:virbr2 ExpiryTime:2024-05-28 22:30:39 +0000 UTC Type:0 Mac:52:54:00:de:e9:57 Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:pause-547166 Clientid:01:52:54:00:de:e9:57}
	I0528 21:32:02.669429   53591 main.go:141] libmachine: (pause-547166) DBG | domain pause-547166 has defined IP address 192.168.50.108 and MAC address 52:54:00:de:e9:57 in network mk-pause-547166
	I0528 21:32:02.669523   53591 provision.go:143] copyHostCerts
	I0528 21:32:02.669589   53591 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem, removing ...
	I0528 21:32:02.669606   53591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem
	I0528 21:32:02.669672   53591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem (1078 bytes)
	I0528 21:32:02.669821   53591 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem, removing ...
	I0528 21:32:02.669832   53591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem
	I0528 21:32:02.669865   53591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem (1123 bytes)
	I0528 21:32:02.669971   53591 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem, removing ...
	I0528 21:32:02.669982   53591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem
	I0528 21:32:02.670009   53591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem (1675 bytes)
	I0528 21:32:02.670148   53591 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem org=jenkins.pause-547166 san=[127.0.0.1 192.168.50.108 localhost minikube pause-547166]
	I0528 21:32:02.739275   53591 provision.go:177] copyRemoteCerts
	I0528 21:32:02.739334   53591 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 21:32:02.739362   53591 main.go:141] libmachine: (pause-547166) Calling .GetSSHHostname
	I0528 21:32:02.741993   53591 main.go:141] libmachine: (pause-547166) DBG | domain pause-547166 has defined MAC address 52:54:00:de:e9:57 in network mk-pause-547166
	I0528 21:32:02.742316   53591 main.go:141] libmachine: (pause-547166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e9:57", ip: ""} in network mk-pause-547166: {Iface:virbr2 ExpiryTime:2024-05-28 22:30:39 +0000 UTC Type:0 Mac:52:54:00:de:e9:57 Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:pause-547166 Clientid:01:52:54:00:de:e9:57}
	I0528 21:32:02.742345   53591 main.go:141] libmachine: (pause-547166) DBG | domain pause-547166 has defined IP address 192.168.50.108 and MAC address 52:54:00:de:e9:57 in network mk-pause-547166
	I0528 21:32:02.742530   53591 main.go:141] libmachine: (pause-547166) Calling .GetSSHPort
	I0528 21:32:02.742718   53591 main.go:141] libmachine: (pause-547166) Calling .GetSSHKeyPath
	I0528 21:32:02.742851   53591 main.go:141] libmachine: (pause-547166) Calling .GetSSHUsername
	I0528 21:32:02.742984   53591 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/pause-547166/id_rsa Username:docker}
	I0528 21:32:02.829033   53591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0528 21:32:02.857460   53591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0528 21:32:02.884751   53591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0528 21:32:02.908574   53591 provision.go:87] duration metric: took 245.540033ms to configureAuth
	I0528 21:32:02.908602   53591 buildroot.go:189] setting minikube options for container-runtime
	I0528 21:32:02.908816   53591 config.go:182] Loaded profile config "pause-547166": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 21:32:02.908901   53591 main.go:141] libmachine: (pause-547166) Calling .GetSSHHostname
	I0528 21:32:02.911467   53591 main.go:141] libmachine: (pause-547166) DBG | domain pause-547166 has defined MAC address 52:54:00:de:e9:57 in network mk-pause-547166
	I0528 21:32:02.911774   53591 main.go:141] libmachine: (pause-547166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e9:57", ip: ""} in network mk-pause-547166: {Iface:virbr2 ExpiryTime:2024-05-28 22:30:39 +0000 UTC Type:0 Mac:52:54:00:de:e9:57 Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:pause-547166 Clientid:01:52:54:00:de:e9:57}
	I0528 21:32:02.911803   53591 main.go:141] libmachine: (pause-547166) DBG | domain pause-547166 has defined IP address 192.168.50.108 and MAC address 52:54:00:de:e9:57 in network mk-pause-547166
	I0528 21:32:02.911948   53591 main.go:141] libmachine: (pause-547166) Calling .GetSSHPort
	I0528 21:32:02.912152   53591 main.go:141] libmachine: (pause-547166) Calling .GetSSHKeyPath
	I0528 21:32:02.912325   53591 main.go:141] libmachine: (pause-547166) Calling .GetSSHKeyPath
	I0528 21:32:02.912476   53591 main.go:141] libmachine: (pause-547166) Calling .GetSSHUsername
	I0528 21:32:02.912630   53591 main.go:141] libmachine: Using SSH client type: native
	I0528 21:32:02.912801   53591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.108 22 <nil> <nil>}
	I0528 21:32:02.912816   53591 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0528 21:32:08.556072   53591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0528 21:32:08.556101   53591 machine.go:97] duration metric: took 6.273883667s to provisionDockerMachine
	I0528 21:32:08.556113   53591 start.go:293] postStartSetup for "pause-547166" (driver="kvm2")
	I0528 21:32:08.556125   53591 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0528 21:32:08.556144   53591 main.go:141] libmachine: (pause-547166) Calling .DriverName
	I0528 21:32:08.556587   53591 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0528 21:32:08.556616   53591 main.go:141] libmachine: (pause-547166) Calling .GetSSHHostname
	I0528 21:32:08.559595   53591 main.go:141] libmachine: (pause-547166) DBG | domain pause-547166 has defined MAC address 52:54:00:de:e9:57 in network mk-pause-547166
	I0528 21:32:08.559999   53591 main.go:141] libmachine: (pause-547166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e9:57", ip: ""} in network mk-pause-547166: {Iface:virbr2 ExpiryTime:2024-05-28 22:30:39 +0000 UTC Type:0 Mac:52:54:00:de:e9:57 Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:pause-547166 Clientid:01:52:54:00:de:e9:57}
	I0528 21:32:08.560021   53591 main.go:141] libmachine: (pause-547166) DBG | domain pause-547166 has defined IP address 192.168.50.108 and MAC address 52:54:00:de:e9:57 in network mk-pause-547166
	I0528 21:32:08.560207   53591 main.go:141] libmachine: (pause-547166) Calling .GetSSHPort
	I0528 21:32:08.560373   53591 main.go:141] libmachine: (pause-547166) Calling .GetSSHKeyPath
	I0528 21:32:08.560533   53591 main.go:141] libmachine: (pause-547166) Calling .GetSSHUsername
	I0528 21:32:08.560664   53591 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/pause-547166/id_rsa Username:docker}
	I0528 21:32:08.653847   53591 ssh_runner.go:195] Run: cat /etc/os-release
	I0528 21:32:08.659009   53591 info.go:137] Remote host: Buildroot 2023.02.9
	I0528 21:32:08.659044   53591 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/addons for local assets ...
	I0528 21:32:08.659104   53591 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/files for local assets ...
	I0528 21:32:08.659176   53591 filesync.go:149] local asset: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem -> 117602.pem in /etc/ssl/certs
	I0528 21:32:08.659315   53591 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0528 21:32:08.671166   53591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem --> /etc/ssl/certs/117602.pem (1708 bytes)
	I0528 21:32:08.704138   53591 start.go:296] duration metric: took 148.010868ms for postStartSetup
	I0528 21:32:08.704178   53591 fix.go:56] duration metric: took 6.4493986s for fixHost
	I0528 21:32:08.704197   53591 main.go:141] libmachine: (pause-547166) Calling .GetSSHHostname
	I0528 21:32:08.707615   53591 main.go:141] libmachine: (pause-547166) DBG | domain pause-547166 has defined MAC address 52:54:00:de:e9:57 in network mk-pause-547166
	I0528 21:32:08.708036   53591 main.go:141] libmachine: (pause-547166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e9:57", ip: ""} in network mk-pause-547166: {Iface:virbr2 ExpiryTime:2024-05-28 22:30:39 +0000 UTC Type:0 Mac:52:54:00:de:e9:57 Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:pause-547166 Clientid:01:52:54:00:de:e9:57}
	I0528 21:32:08.708081   53591 main.go:141] libmachine: (pause-547166) DBG | domain pause-547166 has defined IP address 192.168.50.108 and MAC address 52:54:00:de:e9:57 in network mk-pause-547166
	I0528 21:32:08.708326   53591 main.go:141] libmachine: (pause-547166) Calling .GetSSHPort
	I0528 21:32:08.708541   53591 main.go:141] libmachine: (pause-547166) Calling .GetSSHKeyPath
	I0528 21:32:08.708747   53591 main.go:141] libmachine: (pause-547166) Calling .GetSSHKeyPath
	I0528 21:32:08.708930   53591 main.go:141] libmachine: (pause-547166) Calling .GetSSHUsername
	I0528 21:32:08.709159   53591 main.go:141] libmachine: Using SSH client type: native
	I0528 21:32:08.709406   53591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.108 22 <nil> <nil>}
	I0528 21:32:08.709424   53591 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0528 21:32:08.832794   53591 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716931928.823041772
	
	I0528 21:32:08.832821   53591 fix.go:216] guest clock: 1716931928.823041772
	I0528 21:32:08.832829   53591 fix.go:229] Guest: 2024-05-28 21:32:08.823041772 +0000 UTC Remote: 2024-05-28 21:32:08.704181176 +0000 UTC m=+7.995394038 (delta=118.860596ms)
	I0528 21:32:08.832850   53591 fix.go:200] guest clock delta is within tolerance: 118.860596ms
	I0528 21:32:08.832858   53591 start.go:83] releasing machines lock for "pause-547166", held for 6.578109827s
	I0528 21:32:08.832882   53591 main.go:141] libmachine: (pause-547166) Calling .DriverName
	I0528 21:32:08.833182   53591 main.go:141] libmachine: (pause-547166) Calling .GetIP
	I0528 21:32:08.836257   53591 main.go:141] libmachine: (pause-547166) DBG | domain pause-547166 has defined MAC address 52:54:00:de:e9:57 in network mk-pause-547166
	I0528 21:32:08.836668   53591 main.go:141] libmachine: (pause-547166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e9:57", ip: ""} in network mk-pause-547166: {Iface:virbr2 ExpiryTime:2024-05-28 22:30:39 +0000 UTC Type:0 Mac:52:54:00:de:e9:57 Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:pause-547166 Clientid:01:52:54:00:de:e9:57}
	I0528 21:32:08.836711   53591 main.go:141] libmachine: (pause-547166) DBG | domain pause-547166 has defined IP address 192.168.50.108 and MAC address 52:54:00:de:e9:57 in network mk-pause-547166
	I0528 21:32:08.836833   53591 main.go:141] libmachine: (pause-547166) Calling .DriverName
	I0528 21:32:08.837431   53591 main.go:141] libmachine: (pause-547166) Calling .DriverName
	I0528 21:32:08.837623   53591 main.go:141] libmachine: (pause-547166) Calling .DriverName
	I0528 21:32:08.837735   53591 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0528 21:32:08.837806   53591 main.go:141] libmachine: (pause-547166) Calling .GetSSHHostname
	I0528 21:32:08.837837   53591 ssh_runner.go:195] Run: cat /version.json
	I0528 21:32:08.837862   53591 main.go:141] libmachine: (pause-547166) Calling .GetSSHHostname
	I0528 21:32:08.840925   53591 main.go:141] libmachine: (pause-547166) DBG | domain pause-547166 has defined MAC address 52:54:00:de:e9:57 in network mk-pause-547166
	I0528 21:32:08.841004   53591 main.go:141] libmachine: (pause-547166) DBG | domain pause-547166 has defined MAC address 52:54:00:de:e9:57 in network mk-pause-547166
	I0528 21:32:08.841297   53591 main.go:141] libmachine: (pause-547166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e9:57", ip: ""} in network mk-pause-547166: {Iface:virbr2 ExpiryTime:2024-05-28 22:30:39 +0000 UTC Type:0 Mac:52:54:00:de:e9:57 Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:pause-547166 Clientid:01:52:54:00:de:e9:57}
	I0528 21:32:08.841323   53591 main.go:141] libmachine: (pause-547166) DBG | domain pause-547166 has defined IP address 192.168.50.108 and MAC address 52:54:00:de:e9:57 in network mk-pause-547166
	I0528 21:32:08.841347   53591 main.go:141] libmachine: (pause-547166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e9:57", ip: ""} in network mk-pause-547166: {Iface:virbr2 ExpiryTime:2024-05-28 22:30:39 +0000 UTC Type:0 Mac:52:54:00:de:e9:57 Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:pause-547166 Clientid:01:52:54:00:de:e9:57}
	I0528 21:32:08.841365   53591 main.go:141] libmachine: (pause-547166) DBG | domain pause-547166 has defined IP address 192.168.50.108 and MAC address 52:54:00:de:e9:57 in network mk-pause-547166
	I0528 21:32:08.841595   53591 main.go:141] libmachine: (pause-547166) Calling .GetSSHPort
	I0528 21:32:08.841776   53591 main.go:141] libmachine: (pause-547166) Calling .GetSSHKeyPath
	I0528 21:32:08.841819   53591 main.go:141] libmachine: (pause-547166) Calling .GetSSHPort
	I0528 21:32:08.841989   53591 main.go:141] libmachine: (pause-547166) Calling .GetSSHUsername
	I0528 21:32:08.841998   53591 main.go:141] libmachine: (pause-547166) Calling .GetSSHKeyPath
	I0528 21:32:08.842173   53591 main.go:141] libmachine: (pause-547166) Calling .GetSSHUsername
	I0528 21:32:08.842170   53591 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/pause-547166/id_rsa Username:docker}
	I0528 21:32:08.842354   53591 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/pause-547166/id_rsa Username:docker}
	I0528 21:32:08.955746   53591 ssh_runner.go:195] Run: systemctl --version
	I0528 21:32:08.964973   53591 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0528 21:32:09.141893   53591 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0528 21:32:09.148353   53591 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0528 21:32:09.148409   53591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 21:32:09.159271   53591 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0528 21:32:09.159301   53591 start.go:494] detecting cgroup driver to use...
	I0528 21:32:09.159366   53591 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0528 21:32:09.179765   53591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 21:32:09.195318   53591 docker.go:217] disabling cri-docker service (if available) ...
	I0528 21:32:09.195372   53591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0528 21:32:09.209575   53591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0528 21:32:09.226209   53591 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0528 21:32:09.382734   53591 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0528 21:32:09.521768   53591 docker.go:233] disabling docker service ...
	I0528 21:32:09.521850   53591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0528 21:32:09.540414   53591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0528 21:32:09.557217   53591 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0528 21:32:09.715219   53591 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0528 21:32:09.874461   53591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0528 21:32:09.893270   53591 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 21:32:09.915782   53591 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0528 21:32:09.915834   53591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:32:09.929445   53591 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0528 21:32:09.929509   53591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:32:09.941248   53591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:32:09.952038   53591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:32:09.962722   53591 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0528 21:32:09.973655   53591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:32:09.984754   53591 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:32:10.008956   53591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:32:10.023181   53591 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0528 21:32:10.035325   53591 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0528 21:32:10.046943   53591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 21:32:10.204151   53591 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0528 21:32:16.660445   53591 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.456249891s)
	I0528 21:32:16.660482   53591 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0528 21:32:16.660536   53591 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0528 21:32:16.665947   53591 start.go:562] Will wait 60s for crictl version
	I0528 21:32:16.666010   53591 ssh_runner.go:195] Run: which crictl
	I0528 21:32:16.670417   53591 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0528 21:32:16.710768   53591 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0528 21:32:16.710886   53591 ssh_runner.go:195] Run: crio --version
	I0528 21:32:16.742671   53591 ssh_runner.go:195] Run: crio --version
	I0528 21:32:16.775519   53591 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0528 21:32:16.776803   53591 main.go:141] libmachine: (pause-547166) Calling .GetIP
	I0528 21:32:16.779996   53591 main.go:141] libmachine: (pause-547166) DBG | domain pause-547166 has defined MAC address 52:54:00:de:e9:57 in network mk-pause-547166
	I0528 21:32:16.780443   53591 main.go:141] libmachine: (pause-547166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:e9:57", ip: ""} in network mk-pause-547166: {Iface:virbr2 ExpiryTime:2024-05-28 22:30:39 +0000 UTC Type:0 Mac:52:54:00:de:e9:57 Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:pause-547166 Clientid:01:52:54:00:de:e9:57}
	I0528 21:32:16.780473   53591 main.go:141] libmachine: (pause-547166) DBG | domain pause-547166 has defined IP address 192.168.50.108 and MAC address 52:54:00:de:e9:57 in network mk-pause-547166
	I0528 21:32:16.780719   53591 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0528 21:32:16.785130   53591 kubeadm.go:877] updating cluster {Name:pause-547166 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:pause-547166 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.108 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0528 21:32:16.785243   53591 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 21:32:16.785283   53591 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 21:32:16.829624   53591 crio.go:514] all images are preloaded for cri-o runtime.
	I0528 21:32:16.829648   53591 crio.go:433] Images already preloaded, skipping extraction
	I0528 21:32:16.829701   53591 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 21:32:16.861876   53591 crio.go:514] all images are preloaded for cri-o runtime.
	I0528 21:32:16.861898   53591 cache_images.go:84] Images are preloaded, skipping loading
	I0528 21:32:16.861905   53591 kubeadm.go:928] updating node { 192.168.50.108 8443 v1.30.1 crio true true} ...
	I0528 21:32:16.862008   53591 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-547166 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.108
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:pause-547166 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0528 21:32:16.862065   53591 ssh_runner.go:195] Run: crio config
	I0528 21:32:16.912474   53591 cni.go:84] Creating CNI manager for ""
	I0528 21:32:16.912496   53591 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 21:32:16.912508   53591 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0528 21:32:16.912527   53591 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.108 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-547166 NodeName:pause-547166 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.108"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.108 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0528 21:32:16.912668   53591 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.108
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-547166"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.108
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.108"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0528 21:32:16.912720   53591 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0528 21:32:16.922810   53591 binaries.go:44] Found k8s binaries, skipping transfer
	I0528 21:32:16.922879   53591 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0528 21:32:16.932325   53591 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0528 21:32:16.949062   53591 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0528 21:32:16.966023   53591 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0528 21:32:16.987080   53591 ssh_runner.go:195] Run: grep 192.168.50.108	control-plane.minikube.internal$ /etc/hosts
	I0528 21:32:16.992031   53591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 21:32:17.138437   53591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 21:32:17.155892   53591 certs.go:68] Setting up /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/pause-547166 for IP: 192.168.50.108
	I0528 21:32:17.155913   53591 certs.go:194] generating shared ca certs ...
	I0528 21:32:17.155946   53591 certs.go:226] acquiring lock for ca certs: {Name:mkf081a2e878fd451cfbdf01dc28a5f7a6a3abc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:32:17.156088   53591 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key
	I0528 21:32:17.156123   53591 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key
	I0528 21:32:17.156134   53591 certs.go:256] generating profile certs ...
	I0528 21:32:17.156213   53591 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/pause-547166/client.key
	I0528 21:32:17.156271   53591 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/pause-547166/apiserver.key.4840556a
	I0528 21:32:17.156303   53591 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/pause-547166/proxy-client.key
	I0528 21:32:17.156412   53591 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem (1338 bytes)
	W0528 21:32:17.156438   53591 certs.go:480] ignoring /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760_empty.pem, impossibly tiny 0 bytes
	I0528 21:32:17.156447   53591 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem (1675 bytes)
	I0528 21:32:17.156469   53591 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem (1078 bytes)
	I0528 21:32:17.156492   53591 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem (1123 bytes)
	I0528 21:32:17.156522   53591 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem (1675 bytes)
	I0528 21:32:17.156560   53591 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem (1708 bytes)
	I0528 21:32:17.157152   53591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0528 21:32:17.184934   53591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0528 21:32:17.213193   53591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0528 21:32:17.240084   53591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0528 21:32:17.266708   53591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/pause-547166/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0528 21:32:17.347216   53591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/pause-547166/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0528 21:32:17.604092   53591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/pause-547166/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0528 21:32:17.809858   53591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/pause-547166/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0528 21:32:17.918431   53591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem --> /usr/share/ca-certificates/117602.pem (1708 bytes)
	I0528 21:32:18.095861   53591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0528 21:32:18.157946   53591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem --> /usr/share/ca-certificates/11760.pem (1338 bytes)
	I0528 21:32:18.222893   53591 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0528 21:32:18.264195   53591 ssh_runner.go:195] Run: openssl version
	I0528 21:32:18.290984   53591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/117602.pem && ln -fs /usr/share/ca-certificates/117602.pem /etc/ssl/certs/117602.pem"
	I0528 21:32:18.354940   53591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117602.pem
	I0528 21:32:18.366940   53591 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 28 20:34 /usr/share/ca-certificates/117602.pem
	I0528 21:32:18.367091   53591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117602.pem
	I0528 21:32:18.389523   53591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/117602.pem /etc/ssl/certs/3ec20f2e.0"
	I0528 21:32:18.445988   53591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0528 21:32:18.466933   53591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0528 21:32:18.481096   53591 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 28 20:22 /usr/share/ca-certificates/minikubeCA.pem
	I0528 21:32:18.481168   53591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0528 21:32:18.490324   53591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0528 21:32:18.501003   53591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11760.pem && ln -fs /usr/share/ca-certificates/11760.pem /etc/ssl/certs/11760.pem"
	I0528 21:32:18.513903   53591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11760.pem
	I0528 21:32:18.520453   53591 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 28 20:34 /usr/share/ca-certificates/11760.pem
	I0528 21:32:18.520511   53591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11760.pem
	I0528 21:32:18.528789   53591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11760.pem /etc/ssl/certs/51391683.0"
	I0528 21:32:18.548260   53591 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0528 21:32:18.562351   53591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0528 21:32:18.574547   53591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0528 21:32:18.589527   53591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0528 21:32:18.611936   53591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0528 21:32:18.627684   53591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0528 21:32:18.640798   53591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0528 21:32:18.652236   53591 kubeadm.go:391] StartCluster: {Name:pause-547166 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:pause-547166 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.108 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:32:18.652400   53591 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0528 21:32:18.652476   53591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0528 21:32:18.730598   53591 cri.go:89] found id: "2a31c9d04166019c2a49c4e013e4b94aa69a5bb31cef09d61eb8df29082fac19"
	I0528 21:32:18.730627   53591 cri.go:89] found id: "389d655064d2f72cd1d31d88f738025b8d89ca4b2a1b99ae56387c719cc80594"
	I0528 21:32:18.730632   53591 cri.go:89] found id: "1f249f829d4e7068aec569445d4895582618b4d27ac743a409d93faad930c9de"
	I0528 21:32:18.730637   53591 cri.go:89] found id: "6fd4e65044b3a9dcd1f16966186410e1f690d18a4a5d286b7249fa824cb0213d"
	I0528 21:32:18.730641   53591 cri.go:89] found id: "7c72d9bfb98b10951abf0f7b030e90b97e7106bd756019131ac21b390dafc607"
	I0528 21:32:18.730646   53591 cri.go:89] found id: "0b484f9e66876230b418bebec6d1b267cd656a2658a2615e7176040219b774af"
	I0528 21:32:18.730650   53591 cri.go:89] found id: "71b1b8358fcf18a64c7f6fbec02a6d812406a3f6d7f18f132e87081d13151e99"
	I0528 21:32:18.730654   53591 cri.go:89] found id: "cab49574f7e37cb652b7b1f0d5050ac849fb9e2487c9a9cb3d8a7b4f23b406e9"
	I0528 21:32:18.730658   53591 cri.go:89] found id: "608a9704db445db40e7de956940dc50cb98f710ceb6bad213f03325df2b4a83a"
	I0528 21:32:18.730666   53591 cri.go:89] found id: "e2dd991b81384a9ac70bab8a7397ea9f62eae5c73ef7a37ecbef81571a527a87"
	I0528 21:32:18.730670   53591 cri.go:89] found id: ""
	I0528 21:32:18.730723   53591 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-547166 -n pause-547166
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-547166 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-547166 logs -n 25: (3.564083834s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                    Args                    |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-110727 sudo cat                  | cilium-110727             | jenkins | v1.33.1 | 28 May 24 21:30 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-110727 sudo                      | cilium-110727             | jenkins | v1.33.1 | 28 May 24 21:30 UTC |                     |
	|         | cri-dockerd --version                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-110727 sudo                      | cilium-110727             | jenkins | v1.33.1 | 28 May 24 21:30 UTC |                     |
	|         | systemctl status containerd                |                           |         |         |                     |                     |
	|         | --all --full --no-pager                    |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-187083 sudo                | NoKubernetes-187083       | jenkins | v1.33.1 | 28 May 24 21:30 UTC |                     |
	|         | systemctl is-active --quiet                |                           |         |         |                     |                     |
	|         | service kubelet                            |                           |         |         |                     |                     |
	| ssh     | -p cilium-110727 sudo                      | cilium-110727             | jenkins | v1.33.1 | 28 May 24 21:30 UTC |                     |
	|         | systemctl cat containerd                   |                           |         |         |                     |                     |
	|         | --no-pager                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-110727 sudo cat                  | cilium-110727             | jenkins | v1.33.1 | 28 May 24 21:30 UTC |                     |
	|         | /lib/systemd/system/containerd.service     |                           |         |         |                     |                     |
	| ssh     | -p cilium-110727 sudo cat                  | cilium-110727             | jenkins | v1.33.1 | 28 May 24 21:30 UTC |                     |
	|         | /etc/containerd/config.toml                |                           |         |         |                     |                     |
	| ssh     | -p cilium-110727 sudo                      | cilium-110727             | jenkins | v1.33.1 | 28 May 24 21:30 UTC |                     |
	|         | containerd config dump                     |                           |         |         |                     |                     |
	| ssh     | -p cilium-110727 sudo                      | cilium-110727             | jenkins | v1.33.1 | 28 May 24 21:30 UTC |                     |
	|         | systemctl status crio --all                |                           |         |         |                     |                     |
	|         | --full --no-pager                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-110727 sudo                      | cilium-110727             | jenkins | v1.33.1 | 28 May 24 21:30 UTC |                     |
	|         | systemctl cat crio --no-pager              |                           |         |         |                     |                     |
	| ssh     | -p cilium-110727 sudo find                 | cilium-110727             | jenkins | v1.33.1 | 28 May 24 21:30 UTC |                     |
	|         | /etc/crio -type f -exec sh -c              |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                       |                           |         |         |                     |                     |
	| ssh     | -p cilium-110727 sudo crio                 | cilium-110727             | jenkins | v1.33.1 | 28 May 24 21:30 UTC |                     |
	|         | config                                     |                           |         |         |                     |                     |
	| delete  | -p cilium-110727                           | cilium-110727             | jenkins | v1.33.1 | 28 May 24 21:30 UTC | 28 May 24 21:30 UTC |
	| stop    | -p NoKubernetes-187083                     | NoKubernetes-187083       | jenkins | v1.33.1 | 28 May 24 21:30 UTC | 28 May 24 21:30 UTC |
	| start   | -p pause-547166 --memory=2048              | pause-547166              | jenkins | v1.33.1 | 28 May 24 21:30 UTC | 28 May 24 21:32 UTC |
	|         | --install-addons=false                     |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-185653                  | running-upgrade-185653    | jenkins | v1.33.1 | 28 May 24 21:30 UTC | 28 May 24 21:30 UTC |
	| start   | -p NoKubernetes-187083                     | NoKubernetes-187083       | jenkins | v1.33.1 | 28 May 24 21:30 UTC | 28 May 24 21:31 UTC |
	|         | --driver=kvm2                              |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-314578               | kubernetes-upgrade-314578 | jenkins | v1.33.1 | 28 May 24 21:30 UTC |                     |
	|         | --memory=2200                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0               |                           |         |         |                     |                     |
	|         | --alsologtostderr                          |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-187083 sudo                | NoKubernetes-187083       | jenkins | v1.33.1 | 28 May 24 21:31 UTC |                     |
	|         | systemctl is-active --quiet                |                           |         |         |                     |                     |
	|         | service kubelet                            |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-187083                     | NoKubernetes-187083       | jenkins | v1.33.1 | 28 May 24 21:31 UTC | 28 May 24 21:31 UTC |
	| start   | -p stopped-upgrade-742900                  | minikube                  | jenkins | v1.26.0 | 28 May 24 21:31 UTC | 28 May 24 21:32 UTC |
	|         | --memory=2200 --vm-driver=kvm2             |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                  |                           |         |         |                     |                     |
	| start   | -p pause-547166                            | pause-547166              | jenkins | v1.33.1 | 28 May 24 21:32 UTC | 28 May 24 21:32 UTC |
	|         | --alsologtostderr                          |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-742900 stop                | minikube                  | jenkins | v1.26.0 | 28 May 24 21:32 UTC | 28 May 24 21:32 UTC |
	| start   | -p stopped-upgrade-742900                  | stopped-upgrade-742900    | jenkins | v1.33.1 | 28 May 24 21:32 UTC |                     |
	|         | --memory=2200                              |                           |         |         |                     |                     |
	|         | --alsologtostderr                          |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	| start   | -p cert-expiration-257793                  | cert-expiration-257793    | jenkins | v1.33.1 | 28 May 24 21:32 UTC |                     |
	|         | --memory=2048                              |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                              |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	|---------|--------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/28 21:32:34
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0528 21:32:34.242584   53940 out.go:291] Setting OutFile to fd 1 ...
	I0528 21:32:34.242686   53940 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:32:34.242690   53940 out.go:304] Setting ErrFile to fd 2...
	I0528 21:32:34.242693   53940 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:32:34.242866   53940 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
	I0528 21:32:34.243441   53940 out.go:298] Setting JSON to false
	I0528 21:32:34.244488   53940 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4497,"bootTime":1716927457,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0528 21:32:34.244538   53940 start.go:139] virtualization: kvm guest
	I0528 21:32:34.246530   53940 out.go:177] * [cert-expiration-257793] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0528 21:32:34.248125   53940 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 21:32:34.249237   53940 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 21:32:34.248200   53940 notify.go:220] Checking for updates...
	I0528 21:32:34.251796   53940 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 21:32:34.253232   53940 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 21:32:34.254661   53940 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0528 21:32:34.256058   53940 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 21:32:34.257991   53940 config.go:182] Loaded profile config "cert-expiration-257793": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 21:32:34.258593   53940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:32:34.258644   53940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:32:34.275148   53940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34037
	I0528 21:32:34.275603   53940 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:32:34.276197   53940 main.go:141] libmachine: Using API Version  1
	I0528 21:32:34.276217   53940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:32:34.276597   53940 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:32:34.276800   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .DriverName
	I0528 21:32:34.277093   53940 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 21:32:34.277511   53940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:32:34.277550   53940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:32:34.292745   53940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38007
	I0528 21:32:34.293130   53940 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:32:34.293750   53940 main.go:141] libmachine: Using API Version  1
	I0528 21:32:34.293789   53940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:32:34.294172   53940 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:32:34.294371   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .DriverName
	I0528 21:32:34.331119   53940 out.go:177] * Using the kvm2 driver based on existing profile
	I0528 21:32:34.332304   53940 start.go:297] selected driver: kvm2
	I0528 21:32:34.332310   53940 start.go:901] validating driver "kvm2" against &{Name:cert-expiration-257793 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:cert-expiration-257793 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.246 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:32:34.332441   53940 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0528 21:32:34.333095   53940 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 21:32:34.333158   53940 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18966-3963/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0528 21:32:34.350190   53940 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0528 21:32:34.350527   53940 cni.go:84] Creating CNI manager for ""
	I0528 21:32:34.350535   53940 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 21:32:34.350580   53940 start.go:340] cluster config:
	{Name:cert-expiration-257793 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:cert-expiration-257793 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.246 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:32:34.350737   53940 iso.go:125] acquiring lock: {Name:mk0ef48bd19f4019c3ee87391d15b9afc1d5e8d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 21:32:34.353521   53940 out.go:177] * Starting "cert-expiration-257793" primary control-plane node in "cert-expiration-257793" cluster
	I0528 21:32:32.070685   53591 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 3e2fdee3b8477fc1ea3db36c29f9c36519b2f5dfb905e78905d1b98b3beb3b98 4012e4ffbdbc2847150dbd792548b836bf108cd924bb66995eb87805550bd1ea 2a31c9d04166019c2a49c4e013e4b94aa69a5bb31cef09d61eb8df29082fac19 389d655064d2f72cd1d31d88f738025b8d89ca4b2a1b99ae56387c719cc80594 1f249f829d4e7068aec569445d4895582618b4d27ac743a409d93faad930c9de 6fd4e65044b3a9dcd1f16966186410e1f690d18a4a5d286b7249fa824cb0213d 7c72d9bfb98b10951abf0f7b030e90b97e7106bd756019131ac21b390dafc607 0b484f9e66876230b418bebec6d1b267cd656a2658a2615e7176040219b774af 71b1b8358fcf18a64c7f6fbec02a6d812406a3f6d7f18f132e87081d13151e99 cab49574f7e37cb652b7b1f0d5050ac849fb9e2487c9a9cb3d8a7b4f23b406e9 608a9704db445db40e7de956940dc50cb98f710ceb6bad213f03325df2b4a83a e2dd991b81384a9ac70bab8a7397ea9f62eae5c73ef7a37ecbef81571a527a87: (13.079502055s)
	W0528 21:32:32.070762   53591 kubeadm.go:638] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 3e2fdee3b8477fc1ea3db36c29f9c36519b2f5dfb905e78905d1b98b3beb3b98 4012e4ffbdbc2847150dbd792548b836bf108cd924bb66995eb87805550bd1ea 2a31c9d04166019c2a49c4e013e4b94aa69a5bb31cef09d61eb8df29082fac19 389d655064d2f72cd1d31d88f738025b8d89ca4b2a1b99ae56387c719cc80594 1f249f829d4e7068aec569445d4895582618b4d27ac743a409d93faad930c9de 6fd4e65044b3a9dcd1f16966186410e1f690d18a4a5d286b7249fa824cb0213d 7c72d9bfb98b10951abf0f7b030e90b97e7106bd756019131ac21b390dafc607 0b484f9e66876230b418bebec6d1b267cd656a2658a2615e7176040219b774af 71b1b8358fcf18a64c7f6fbec02a6d812406a3f6d7f18f132e87081d13151e99 cab49574f7e37cb652b7b1f0d5050ac849fb9e2487c9a9cb3d8a7b4f23b406e9 608a9704db445db40e7de956940dc50cb98f710ceb6bad213f03325df2b4a83a e2dd991b81384a9ac70bab8a7397ea9f62eae5c73ef7a37ecbef81571a527a87: Process exited with status 1
	stdout:
	3e2fdee3b8477fc1ea3db36c29f9c36519b2f5dfb905e78905d1b98b3beb3b98
	4012e4ffbdbc2847150dbd792548b836bf108cd924bb66995eb87805550bd1ea
	2a31c9d04166019c2a49c4e013e4b94aa69a5bb31cef09d61eb8df29082fac19
	389d655064d2f72cd1d31d88f738025b8d89ca4b2a1b99ae56387c719cc80594
	1f249f829d4e7068aec569445d4895582618b4d27ac743a409d93faad930c9de
	6fd4e65044b3a9dcd1f16966186410e1f690d18a4a5d286b7249fa824cb0213d
	
	stderr:
	E0528 21:32:32.058500    3164 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c72d9bfb98b10951abf0f7b030e90b97e7106bd756019131ac21b390dafc607\": container with ID starting with 7c72d9bfb98b10951abf0f7b030e90b97e7106bd756019131ac21b390dafc607 not found: ID does not exist" containerID="7c72d9bfb98b10951abf0f7b030e90b97e7106bd756019131ac21b390dafc607"
	time="2024-05-28T21:32:32Z" level=fatal msg="stopping the container \"7c72d9bfb98b10951abf0f7b030e90b97e7106bd756019131ac21b390dafc607\": rpc error: code = NotFound desc = could not find container \"7c72d9bfb98b10951abf0f7b030e90b97e7106bd756019131ac21b390dafc607\": container with ID starting with 7c72d9bfb98b10951abf0f7b030e90b97e7106bd756019131ac21b390dafc607 not found: ID does not exist"
	I0528 21:32:32.070816   53591 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0528 21:32:32.111727   53591 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0528 21:32:32.122727   53591 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5651 May 28 21:30 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 May 28 21:30 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 May 28 21:31 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5602 May 28 21:30 /etc/kubernetes/scheduler.conf
	
	I0528 21:32:32.122783   53591 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0528 21:32:32.132269   53591 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0528 21:32:32.141481   53591 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0528 21:32:32.151679   53591 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0528 21:32:32.151735   53591 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0528 21:32:32.163796   53591 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0528 21:32:32.174517   53591 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0528 21:32:32.174568   53591 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
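The checks above grep each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and delete the ones that no longer reference it, so the following kubeadm phases can regenerate them. A compact Go sketch of that check (endpoint and file list are copied from the log; this is an illustration, not minikube's actual helper):

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	endpoint := []byte("https://control-plane.minikube.internal:8443")
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			fmt.Printf("skipping %s: %v\n", f, err)
			continue
		}
		if !bytes.Contains(data, endpoint) {
			// Stale kubeconfig: remove it so "kubeadm init phase kubeconfig"
			// can write a fresh one, as the log does next.
			fmt.Println("removing", f)
			_ = os.Remove(f)
		}
	}
}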
	I0528 21:32:32.184310   53591 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0528 21:32:32.194274   53591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 21:32:32.263540   53591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 21:32:33.090261   53591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0528 21:32:33.342804   53591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 21:32:33.422569   53591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0528 21:32:33.534409   53591 api_server.go:52] waiting for apiserver process to appear ...
	I0528 21:32:33.534496   53591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:32:34.035298   53591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:32:34.535002   53591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:32:34.553246   53591 api_server.go:72] duration metric: took 1.018835926s to wait for apiserver process to appear ...
	I0528 21:32:34.553281   53591 api_server.go:88] waiting for apiserver healthz status ...
	I0528 21:32:34.553303   53591 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0528 21:32:31.540797   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .Start
	I0528 21:32:31.540947   53852 main.go:141] libmachine: (stopped-upgrade-742900) Ensuring networks are active...
	I0528 21:32:31.541794   53852 main.go:141] libmachine: (stopped-upgrade-742900) Ensuring network default is active
	I0528 21:32:31.542165   53852 main.go:141] libmachine: (stopped-upgrade-742900) Ensuring network mk-stopped-upgrade-742900 is active
	I0528 21:32:31.542537   53852 main.go:141] libmachine: (stopped-upgrade-742900) Getting domain xml...
	I0528 21:32:31.543124   53852 main.go:141] libmachine: (stopped-upgrade-742900) Creating domain...
	I0528 21:32:32.779953   53852 main.go:141] libmachine: (stopped-upgrade-742900) Waiting to get IP...
	I0528 21:32:32.780835   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:32.781260   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | unable to find current IP address of domain stopped-upgrade-742900 in network mk-stopped-upgrade-742900
	I0528 21:32:32.781321   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | I0528 21:32:32.781228   53887 retry.go:31] will retry after 248.685661ms: waiting for machine to come up
	I0528 21:32:33.031975   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:33.032518   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | unable to find current IP address of domain stopped-upgrade-742900 in network mk-stopped-upgrade-742900
	I0528 21:32:33.032547   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | I0528 21:32:33.032466   53887 retry.go:31] will retry after 241.510489ms: waiting for machine to come up
	I0528 21:32:33.275894   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:33.276373   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | unable to find current IP address of domain stopped-upgrade-742900 in network mk-stopped-upgrade-742900
	I0528 21:32:33.276400   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | I0528 21:32:33.276300   53887 retry.go:31] will retry after 437.759362ms: waiting for machine to come up
	I0528 21:32:33.715917   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:33.716440   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | unable to find current IP address of domain stopped-upgrade-742900 in network mk-stopped-upgrade-742900
	I0528 21:32:33.716465   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | I0528 21:32:33.716389   53887 retry.go:31] will retry after 385.209263ms: waiting for machine to come up
	I0528 21:32:34.103119   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:34.103520   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | unable to find current IP address of domain stopped-upgrade-742900 in network mk-stopped-upgrade-742900
	I0528 21:32:34.103551   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | I0528 21:32:34.103475   53887 retry.go:31] will retry after 467.39146ms: waiting for machine to come up
	I0528 21:32:34.572088   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:34.572685   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | unable to find current IP address of domain stopped-upgrade-742900 in network mk-stopped-upgrade-742900
	I0528 21:32:34.572716   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | I0528 21:32:34.572624   53887 retry.go:31] will retry after 768.631697ms: waiting for machine to come up
	I0528 21:32:35.342701   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:35.343237   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | unable to find current IP address of domain stopped-upgrade-742900 in network mk-stopped-upgrade-742900
	I0528 21:32:35.343264   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | I0528 21:32:35.343199   53887 retry.go:31] will retry after 799.0965ms: waiting for machine to come up
	I0528 21:32:36.144236   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:36.144769   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | unable to find current IP address of domain stopped-upgrade-742900 in network mk-stopped-upgrade-742900
	I0528 21:32:36.144816   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | I0528 21:32:36.144714   53887 retry.go:31] will retry after 1.244270656s: waiting for machine to come up
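The block above is a poll-with-backoff loop: each attempt to read the domain's DHCP lease fails with "unable to find current IP address", and retry.go schedules another attempt after a slightly longer, jittered delay. A self-contained Go sketch of that pattern (the jitter, starting delay and timeout are assumptions, not minikube's exact values):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("unable to find current IP address")

// lookupIP stands in for querying the libvirt DHCP leases; it succeeds
// after a few attempts purely so the sketch terminates.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errNoIP
	}
	return "192.168.61.251", nil
}

func main() {
	delay := 250 * time.Millisecond
	deadline := time.Now().Add(2 * time.Minute)
	for attempt := 1; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		if time.Now().After(deadline) {
			fmt.Println("gave up waiting for machine to come up:", err)
			return
		}
		// Jittered, roughly doubling delay, like the retry intervals in the log.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay *= 2
	}
}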
	I0528 21:32:36.923209   53591 api_server.go:279] https://192.168.50.108:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0528 21:32:36.923243   53591 api_server.go:103] status: https://192.168.50.108:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0528 21:32:36.923261   53591 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0528 21:32:36.988913   53591 api_server.go:279] https://192.168.50.108:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0528 21:32:36.988938   53591 api_server.go:103] status: https://192.168.50.108:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0528 21:32:37.054146   53591 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0528 21:32:37.067176   53591 api_server.go:279] https://192.168.50.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 21:32:37.067201   53591 api_server.go:103] status: https://192.168.50.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 21:32:37.554133   53591 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0528 21:32:37.559394   53591 api_server.go:279] https://192.168.50.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 21:32:37.559415   53591 api_server.go:103] status: https://192.168.50.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 21:32:38.053589   53591 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0528 21:32:38.059455   53591 api_server.go:279] https://192.168.50.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 21:32:38.059479   53591 api_server.go:103] status: https://192.168.50.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 21:32:38.554091   53591 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0528 21:32:38.558328   53591 api_server.go:279] https://192.168.50.108:8443/healthz returned 200:
	ok
	I0528 21:32:38.564767   53591 api_server.go:141] control plane version: v1.30.1
	I0528 21:32:38.564793   53591 api_server.go:131] duration metric: took 4.011504281s to wait for apiserver health ...
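The probes above walk through the usual startup sequence: 403 while anonymous access is still rejected, 500 while post-start hooks (RBAC bootstrap, CRD informers, priority classes) are catching up, then 200 once the control plane is ready. A minimal Go sketch of such a health poll (the URL comes from the log; the timeout, interval and skipped TLS verification are assumptions for illustration, and minikube's real client also presents client certificates):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// InsecureSkipVerify only so the sketch runs against a self-signed
	// apiserver certificate; a real client should trust the cluster CA.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.50.108:8443/healthz"
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}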
	I0528 21:32:38.564803   53591 cni.go:84] Creating CNI manager for ""
	I0528 21:32:38.564811   53591 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 21:32:38.566498   53591 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0528 21:32:34.354841   53940 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 21:32:34.354878   53940 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0528 21:32:34.354885   53940 cache.go:56] Caching tarball of preloaded images
	I0528 21:32:34.354995   53940 preload.go:173] Found /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0528 21:32:34.355005   53940 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0528 21:32:34.355137   53940 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/cert-expiration-257793/config.json ...
	I0528 21:32:34.355402   53940 start.go:360] acquireMachinesLock for cert-expiration-257793: {Name:mk6fb3103b370c7f3d9923fd9de7afe2c77239d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0528 21:32:38.567919   53591 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0528 21:32:38.582752   53591 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
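The two lines above create /etc/cni/net.d and copy a 496-byte conflist for the bridge CNI that minikube recommends for the kvm2 + crio combination. The file's contents are not shown in the log; the sketch below writes a typical bridge conflist (the JSON body, subnet and plugin options are assumptions and may differ from what minikube actually ships):

package main

import (
	"fmt"
	"os"
)

// conflist is an illustrative bridge CNI configuration, not the exact file
// referenced in the log.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		fmt.Println("mkdir failed:", err)
		return
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		fmt.Println("write failed:", err)
	}
}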
	I0528 21:32:38.600822   53591 system_pods.go:43] waiting for kube-system pods to appear ...
	I0528 21:32:38.609489   53591 system_pods.go:59] 6 kube-system pods found
	I0528 21:32:38.609512   53591 system_pods.go:61] "coredns-7db6d8ff4d-7rb9n" [4e37fe79-cc67-4012-93b6-79ecc1f88ec7] Running
	I0528 21:32:38.609519   53591 system_pods.go:61] "etcd-pause-547166" [d9bfd727-090c-447f-8d1c-fb41302a4f99] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0528 21:32:38.609526   53591 system_pods.go:61] "kube-apiserver-pause-547166" [9bfb145c-9adf-4ba1-b909-e5d1fc40a080] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0528 21:32:38.609533   53591 system_pods.go:61] "kube-controller-manager-pause-547166" [605d47d8-50b2-4b0c-8ea3-0da1a7ce121a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0528 21:32:38.609540   53591 system_pods.go:61] "kube-proxy-94v5m" [b8bf4bf8-52a8-4277-a373-bbeef065c3f5] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0528 21:32:38.609550   53591 system_pods.go:61] "kube-scheduler-pause-547166" [262f8e41-4c82-4d7a-8f49-7be7a940bd96] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0528 21:32:38.609560   53591 system_pods.go:74] duration metric: took 8.72029ms to wait for pod list to return data ...
	I0528 21:32:38.609570   53591 node_conditions.go:102] verifying NodePressure condition ...
	I0528 21:32:38.612535   53591 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 21:32:38.612557   53591 node_conditions.go:123] node cpu capacity is 2
	I0528 21:32:38.612572   53591 node_conditions.go:105] duration metric: took 2.992642ms to run NodePressure ...
	I0528 21:32:38.612590   53591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 21:32:38.892573   53591 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0528 21:32:38.896612   53591 kubeadm.go:733] kubelet initialised
	I0528 21:32:38.896639   53591 kubeadm.go:734] duration metric: took 4.038221ms waiting for restarted kubelet to initialise ...
	I0528 21:32:38.896650   53591 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 21:32:38.900931   53591 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-7rb9n" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:38.906523   53591 pod_ready.go:92] pod "coredns-7db6d8ff4d-7rb9n" in "kube-system" namespace has status "Ready":"True"
	I0528 21:32:38.906548   53591 pod_ready.go:81] duration metric: took 5.594734ms for pod "coredns-7db6d8ff4d-7rb9n" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:38.906561   53591 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-547166" in "kube-system" namespace to be "Ready" ...
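From here the log waits up to 4m0s for each system-critical pod to report the Ready condition (the etcd pod below stays "False" for several iterations). A condensed client-go sketch of that kind of wait (the kubeconfig path is a placeholder; namespace and pod name are taken from the log; error handling is trimmed):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path is a placeholder for the sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-pause-547166", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod to be Ready")
			return
		case <-time.After(2 * time.Second):
		}
	}
}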
	I0528 21:32:37.390639   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:37.391129   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | unable to find current IP address of domain stopped-upgrade-742900 in network mk-stopped-upgrade-742900
	I0528 21:32:37.391157   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | I0528 21:32:37.391094   53887 retry.go:31] will retry after 1.203886087s: waiting for machine to come up
	I0528 21:32:38.596302   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:38.596841   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | unable to find current IP address of domain stopped-upgrade-742900 in network mk-stopped-upgrade-742900
	I0528 21:32:38.596869   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | I0528 21:32:38.596793   53887 retry.go:31] will retry after 1.790511234s: waiting for machine to come up
	I0528 21:32:40.388647   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:40.389227   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | unable to find current IP address of domain stopped-upgrade-742900 in network mk-stopped-upgrade-742900
	I0528 21:32:40.389266   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | I0528 21:32:40.389161   53887 retry.go:31] will retry after 2.288302933s: waiting for machine to come up
	I0528 21:32:40.913817   53591 pod_ready.go:102] pod "etcd-pause-547166" in "kube-system" namespace has status "Ready":"False"
	I0528 21:32:42.914013   53591 pod_ready.go:102] pod "etcd-pause-547166" in "kube-system" namespace has status "Ready":"False"
	I0528 21:32:45.412639   53591 pod_ready.go:102] pod "etcd-pause-547166" in "kube-system" namespace has status "Ready":"False"
	I0528 21:32:42.678884   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:42.679427   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | unable to find current IP address of domain stopped-upgrade-742900 in network mk-stopped-upgrade-742900
	I0528 21:32:42.679458   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | I0528 21:32:42.679372   53887 retry.go:31] will retry after 2.516621293s: waiting for machine to come up
	I0528 21:32:45.197084   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:45.197584   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | unable to find current IP address of domain stopped-upgrade-742900 in network mk-stopped-upgrade-742900
	I0528 21:32:45.197640   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | I0528 21:32:45.197545   53887 retry.go:31] will retry after 4.476953608s: waiting for machine to come up
	I0528 21:32:43.235439   52629 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:32:43.235648   52629 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:32:47.413533   53591 pod_ready.go:102] pod "etcd-pause-547166" in "kube-system" namespace has status "Ready":"False"
	I0528 21:32:49.913943   53591 pod_ready.go:102] pod "etcd-pause-547166" in "kube-system" namespace has status "Ready":"False"
	I0528 21:32:51.062411   53940 start.go:364] duration metric: took 16.706975401s to acquireMachinesLock for "cert-expiration-257793"
	I0528 21:32:51.062450   53940 start.go:96] Skipping create...Using existing machine configuration
	I0528 21:32:51.062455   53940 fix.go:54] fixHost starting: 
	I0528 21:32:51.062849   53940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:32:51.062888   53940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:32:51.079853   53940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33669
	I0528 21:32:51.080196   53940 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:32:51.080698   53940 main.go:141] libmachine: Using API Version  1
	I0528 21:32:51.080720   53940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:32:51.081018   53940 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:32:51.081180   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .DriverName
	I0528 21:32:51.081304   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetState
	I0528 21:32:51.082936   53940 fix.go:112] recreateIfNeeded on cert-expiration-257793: state=Running err=<nil>
	W0528 21:32:51.082951   53940 fix.go:138] unexpected machine state, will restart: <nil>
	I0528 21:32:51.084493   53940 out.go:177] * Updating the running kvm2 "cert-expiration-257793" VM ...
	I0528 21:32:49.678422   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:49.678959   53852 main.go:141] libmachine: (stopped-upgrade-742900) Found IP for machine: 192.168.61.251
	I0528 21:32:49.678986   53852 main.go:141] libmachine: (stopped-upgrade-742900) Reserving static IP address...
	I0528 21:32:49.679019   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has current primary IP address 192.168.61.251 and MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:49.679483   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | found host DHCP lease matching {name: "stopped-upgrade-742900", mac: "52:54:00:d5:2c:e6", ip: "192.168.61.251"} in network mk-stopped-upgrade-742900: {Iface:virbr3 ExpiryTime:2024-05-28 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d5:2c:e6 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:stopped-upgrade-742900 Clientid:01:52:54:00:d5:2c:e6}
	I0528 21:32:49.679515   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | skip adding static IP to network mk-stopped-upgrade-742900 - found existing host DHCP lease matching {name: "stopped-upgrade-742900", mac: "52:54:00:d5:2c:e6", ip: "192.168.61.251"}
	I0528 21:32:49.679542   53852 main.go:141] libmachine: (stopped-upgrade-742900) Reserved static IP address: 192.168.61.251
	I0528 21:32:49.679561   53852 main.go:141] libmachine: (stopped-upgrade-742900) Waiting for SSH to be available...
	I0528 21:32:49.679576   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | Getting to WaitForSSH function...
	I0528 21:32:49.681793   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:49.682251   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:2c:e6", ip: ""} in network mk-stopped-upgrade-742900: {Iface:virbr3 ExpiryTime:2024-05-28 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d5:2c:e6 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:stopped-upgrade-742900 Clientid:01:52:54:00:d5:2c:e6}
	I0528 21:32:49.682290   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined IP address 192.168.61.251 and MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:49.682369   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | Using SSH client type: external
	I0528 21:32:49.682399   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | Using SSH private key: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/stopped-upgrade-742900/id_rsa (-rw-------)
	I0528 21:32:49.682440   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.251 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18966-3963/.minikube/machines/stopped-upgrade-742900/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0528 21:32:49.682459   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | About to run SSH command:
	I0528 21:32:49.682479   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | exit 0
	I0528 21:32:49.777634   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | SSH cmd err, output: <nil>: 
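The "Using SSH client type: external" lines show libmachine shelling out to the system ssh binary with a fixed option set and probing reachability with "exit 0". A hedged Go sketch of building a similar invocation with os/exec (host, user and key path are copied from the log; the option list is abridged):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	key := "/home/jenkins/minikube-integration/18966-3963/.minikube/machines/stopped-upgrade-742900/id_rsa"
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", key,
		"-p", "22",
		"docker@192.168.61.251",
		"exit 0", // the same reachability probe as in the log
	)
	out, err := cmd.CombinedOutput()
	fmt.Printf("ssh exited with err=%v, output=%q\n", err, out)
}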
	I0528 21:32:49.778016   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetConfigRaw
	I0528 21:32:49.778668   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetIP
	I0528 21:32:49.781414   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:49.781911   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:2c:e6", ip: ""} in network mk-stopped-upgrade-742900: {Iface:virbr3 ExpiryTime:2024-05-28 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d5:2c:e6 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:stopped-upgrade-742900 Clientid:01:52:54:00:d5:2c:e6}
	I0528 21:32:49.781936   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined IP address 192.168.61.251 and MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:49.782184   53852 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/stopped-upgrade-742900/config.json ...
	I0528 21:32:49.782408   53852 machine.go:94] provisionDockerMachine start ...
	I0528 21:32:49.782433   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .DriverName
	I0528 21:32:49.782639   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHHostname
	I0528 21:32:49.785397   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:49.785813   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:2c:e6", ip: ""} in network mk-stopped-upgrade-742900: {Iface:virbr3 ExpiryTime:2024-05-28 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d5:2c:e6 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:stopped-upgrade-742900 Clientid:01:52:54:00:d5:2c:e6}
	I0528 21:32:49.785834   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined IP address 192.168.61.251 and MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:49.785989   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHPort
	I0528 21:32:49.786160   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHKeyPath
	I0528 21:32:49.786357   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHKeyPath
	I0528 21:32:49.786531   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHUsername
	I0528 21:32:49.786743   53852 main.go:141] libmachine: Using SSH client type: native
	I0528 21:32:49.786987   53852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.251 22 <nil> <nil>}
	I0528 21:32:49.787004   53852 main.go:141] libmachine: About to run SSH command:
	hostname
	I0528 21:32:49.914247   53852 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0528 21:32:49.914279   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetMachineName
	I0528 21:32:49.914516   53852 buildroot.go:166] provisioning hostname "stopped-upgrade-742900"
	I0528 21:32:49.914537   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetMachineName
	I0528 21:32:49.914749   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHHostname
	I0528 21:32:49.916949   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:49.917301   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:2c:e6", ip: ""} in network mk-stopped-upgrade-742900: {Iface:virbr3 ExpiryTime:2024-05-28 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d5:2c:e6 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:stopped-upgrade-742900 Clientid:01:52:54:00:d5:2c:e6}
	I0528 21:32:49.917318   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined IP address 192.168.61.251 and MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:49.917537   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHPort
	I0528 21:32:49.917697   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHKeyPath
	I0528 21:32:49.917869   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHKeyPath
	I0528 21:32:49.918002   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHUsername
	I0528 21:32:49.918166   53852 main.go:141] libmachine: Using SSH client type: native
	I0528 21:32:49.918323   53852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.251 22 <nil> <nil>}
	I0528 21:32:49.918333   53852 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-742900 && echo "stopped-upgrade-742900" | sudo tee /etc/hostname
	I0528 21:32:50.052046   53852 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-742900
	
	I0528 21:32:50.052076   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHHostname
	I0528 21:32:50.054728   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:50.055139   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:2c:e6", ip: ""} in network mk-stopped-upgrade-742900: {Iface:virbr3 ExpiryTime:2024-05-28 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d5:2c:e6 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:stopped-upgrade-742900 Clientid:01:52:54:00:d5:2c:e6}
	I0528 21:32:50.055172   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined IP address 192.168.61.251 and MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:50.055343   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHPort
	I0528 21:32:50.055560   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHKeyPath
	I0528 21:32:50.055805   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHKeyPath
	I0528 21:32:50.055974   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHUsername
	I0528 21:32:50.056136   53852 main.go:141] libmachine: Using SSH client type: native
	I0528 21:32:50.056311   53852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.251 22 <nil> <nil>}
	I0528 21:32:50.056327   53852 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-742900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-742900/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-742900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 21:32:50.185464   53852 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 21:32:50.185493   53852 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18966-3963/.minikube CaCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18966-3963/.minikube}
	I0528 21:32:50.185544   53852 buildroot.go:174] setting up certificates
	I0528 21:32:50.185559   53852 provision.go:84] configureAuth start
	I0528 21:32:50.185575   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetMachineName
	I0528 21:32:50.185873   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetIP
	I0528 21:32:50.188803   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:50.189242   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:2c:e6", ip: ""} in network mk-stopped-upgrade-742900: {Iface:virbr3 ExpiryTime:2024-05-28 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d5:2c:e6 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:stopped-upgrade-742900 Clientid:01:52:54:00:d5:2c:e6}
	I0528 21:32:50.189280   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined IP address 192.168.61.251 and MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:50.189482   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHHostname
	I0528 21:32:50.191777   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:50.192159   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:2c:e6", ip: ""} in network mk-stopped-upgrade-742900: {Iface:virbr3 ExpiryTime:2024-05-28 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d5:2c:e6 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:stopped-upgrade-742900 Clientid:01:52:54:00:d5:2c:e6}
	I0528 21:32:50.192194   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined IP address 192.168.61.251 and MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:50.192334   53852 provision.go:143] copyHostCerts
	I0528 21:32:50.192397   53852 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem, removing ...
	I0528 21:32:50.192420   53852 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem
	I0528 21:32:50.192502   53852 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem (1123 bytes)
	I0528 21:32:50.192627   53852 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem, removing ...
	I0528 21:32:50.192640   53852 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem
	I0528 21:32:50.192679   53852 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem (1675 bytes)
	I0528 21:32:50.192772   53852 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem, removing ...
	I0528 21:32:50.192782   53852 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem
	I0528 21:32:50.192814   53852 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem (1078 bytes)
	I0528 21:32:50.192904   53852 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-742900 san=[127.0.0.1 192.168.61.251 localhost minikube stopped-upgrade-742900]
	I0528 21:32:50.352058   53852 provision.go:177] copyRemoteCerts
	I0528 21:32:50.352113   53852 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 21:32:50.352137   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHHostname
	I0528 21:32:50.354621   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:50.354940   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:2c:e6", ip: ""} in network mk-stopped-upgrade-742900: {Iface:virbr3 ExpiryTime:2024-05-28 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d5:2c:e6 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:stopped-upgrade-742900 Clientid:01:52:54:00:d5:2c:e6}
	I0528 21:32:50.354973   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined IP address 192.168.61.251 and MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:50.355129   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHPort
	I0528 21:32:50.355323   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHKeyPath
	I0528 21:32:50.355500   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHUsername
	I0528 21:32:50.355616   53852 sshutil.go:53] new ssh client: &{IP:192.168.61.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/stopped-upgrade-742900/id_rsa Username:docker}
	I0528 21:32:50.450765   53852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0528 21:32:50.470977   53852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0528 21:32:50.490550   53852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0528 21:32:50.510285   53852 provision.go:87] duration metric: took 324.711205ms to configureAuth
	I0528 21:32:50.510316   53852 buildroot.go:189] setting minikube options for container-runtime
	I0528 21:32:50.510528   53852 config.go:182] Loaded profile config "stopped-upgrade-742900": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0528 21:32:50.510615   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHHostname
	I0528 21:32:50.513188   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:50.513543   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:2c:e6", ip: ""} in network mk-stopped-upgrade-742900: {Iface:virbr3 ExpiryTime:2024-05-28 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d5:2c:e6 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:stopped-upgrade-742900 Clientid:01:52:54:00:d5:2c:e6}
	I0528 21:32:50.513572   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined IP address 192.168.61.251 and MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:50.513838   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHPort
	I0528 21:32:50.514012   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHKeyPath
	I0528 21:32:50.514174   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHKeyPath
	I0528 21:32:50.514312   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHUsername
	I0528 21:32:50.514502   53852 main.go:141] libmachine: Using SSH client type: native
	I0528 21:32:50.514676   53852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.251 22 <nil> <nil>}
	I0528 21:32:50.514696   53852 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0528 21:32:50.806228   53852 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0528 21:32:50.806276   53852 machine.go:97] duration metric: took 1.023844629s to provisionDockerMachine
	I0528 21:32:50.806289   53852 start.go:293] postStartSetup for "stopped-upgrade-742900" (driver="kvm2")
	I0528 21:32:50.806303   53852 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0528 21:32:50.806332   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .DriverName
	I0528 21:32:50.806608   53852 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0528 21:32:50.806649   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHHostname
	I0528 21:32:50.809185   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:50.809535   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:2c:e6", ip: ""} in network mk-stopped-upgrade-742900: {Iface:virbr3 ExpiryTime:2024-05-28 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d5:2c:e6 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:stopped-upgrade-742900 Clientid:01:52:54:00:d5:2c:e6}
	I0528 21:32:50.809562   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined IP address 192.168.61.251 and MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:50.809737   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHPort
	I0528 21:32:50.809959   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHKeyPath
	I0528 21:32:50.810169   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHUsername
	I0528 21:32:50.810353   53852 sshutil.go:53] new ssh client: &{IP:192.168.61.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/stopped-upgrade-742900/id_rsa Username:docker}
	I0528 21:32:50.900822   53852 ssh_runner.go:195] Run: cat /etc/os-release
	I0528 21:32:50.905507   53852 info.go:137] Remote host: Buildroot 2021.02.12
	I0528 21:32:50.905527   53852 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/addons for local assets ...
	I0528 21:32:50.905594   53852 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/files for local assets ...
	I0528 21:32:50.905700   53852 filesync.go:149] local asset: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem -> 117602.pem in /etc/ssl/certs
	I0528 21:32:50.905819   53852 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0528 21:32:50.916302   53852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem --> /etc/ssl/certs/117602.pem (1708 bytes)
	I0528 21:32:50.936446   53852 start.go:296] duration metric: took 130.143804ms for postStartSetup
	I0528 21:32:50.936482   53852 fix.go:56] duration metric: took 19.414735564s for fixHost
	I0528 21:32:50.936500   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHHostname
	I0528 21:32:50.939307   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:50.939636   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:2c:e6", ip: ""} in network mk-stopped-upgrade-742900: {Iface:virbr3 ExpiryTime:2024-05-28 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d5:2c:e6 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:stopped-upgrade-742900 Clientid:01:52:54:00:d5:2c:e6}
	I0528 21:32:50.939664   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined IP address 192.168.61.251 and MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:50.939860   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHPort
	I0528 21:32:50.940049   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHKeyPath
	I0528 21:32:50.940224   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHKeyPath
	I0528 21:32:50.940396   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHUsername
	I0528 21:32:50.940604   53852 main.go:141] libmachine: Using SSH client type: native
	I0528 21:32:50.940777   53852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.251 22 <nil> <nil>}
	I0528 21:32:50.940788   53852 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0528 21:32:51.062245   53852 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716931971.017672263
	
	I0528 21:32:51.062268   53852 fix.go:216] guest clock: 1716931971.017672263
	I0528 21:32:51.062277   53852 fix.go:229] Guest: 2024-05-28 21:32:51.017672263 +0000 UTC Remote: 2024-05-28 21:32:50.936485219 +0000 UTC m=+19.553121433 (delta=81.187044ms)
	I0528 21:32:51.062328   53852 fix.go:200] guest clock delta is within tolerance: 81.187044ms
	I0528 21:32:51.062336   53852 start.go:83] releasing machines lock for "stopped-upgrade-742900", held for 19.540609295s
	I0528 21:32:51.062368   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .DriverName
	I0528 21:32:51.062642   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetIP
	I0528 21:32:51.065448   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:51.065853   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:2c:e6", ip: ""} in network mk-stopped-upgrade-742900: {Iface:virbr3 ExpiryTime:2024-05-28 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d5:2c:e6 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:stopped-upgrade-742900 Clientid:01:52:54:00:d5:2c:e6}
	I0528 21:32:51.065881   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined IP address 192.168.61.251 and MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:51.066074   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .DriverName
	I0528 21:32:51.066595   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .DriverName
	I0528 21:32:51.066760   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .DriverName
	I0528 21:32:51.066830   53852 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0528 21:32:51.066863   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHHostname
	I0528 21:32:51.067005   53852 ssh_runner.go:195] Run: cat /version.json
	I0528 21:32:51.067035   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHHostname
	I0528 21:32:51.069844   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:51.069934   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:51.070310   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:2c:e6", ip: ""} in network mk-stopped-upgrade-742900: {Iface:virbr3 ExpiryTime:2024-05-28 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d5:2c:e6 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:stopped-upgrade-742900 Clientid:01:52:54:00:d5:2c:e6}
	I0528 21:32:51.070340   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined IP address 192.168.61.251 and MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:51.070366   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:2c:e6", ip: ""} in network mk-stopped-upgrade-742900: {Iface:virbr3 ExpiryTime:2024-05-28 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d5:2c:e6 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:stopped-upgrade-742900 Clientid:01:52:54:00:d5:2c:e6}
	I0528 21:32:51.070405   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined IP address 192.168.61.251 and MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:51.070549   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHPort
	I0528 21:32:51.070665   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHPort
	I0528 21:32:51.070752   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHKeyPath
	I0528 21:32:51.070823   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHKeyPath
	I0528 21:32:51.070897   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHUsername
	I0528 21:32:51.070964   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHUsername
	I0528 21:32:51.071048   53852 sshutil.go:53] new ssh client: &{IP:192.168.61.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/stopped-upgrade-742900/id_rsa Username:docker}
	I0528 21:32:51.071086   53852 sshutil.go:53] new ssh client: &{IP:192.168.61.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/stopped-upgrade-742900/id_rsa Username:docker}
	W0528 21:32:51.184101   53852 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0528 21:32:51.184192   53852 ssh_runner.go:195] Run: systemctl --version
	I0528 21:32:51.191060   53852 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0528 21:32:51.334557   53852 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0528 21:32:51.342304   53852 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0528 21:32:51.342371   53852 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 21:32:51.355983   53852 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0528 21:32:51.356003   53852 start.go:494] detecting cgroup driver to use...
	I0528 21:32:51.356060   53852 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0528 21:32:51.372760   53852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 21:32:51.386157   53852 docker.go:217] disabling cri-docker service (if available) ...
	I0528 21:32:51.386221   53852 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0528 21:32:51.399351   53852 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0528 21:32:51.411327   53852 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0528 21:32:51.520656   53852 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0528 21:32:51.652706   53852 docker.go:233] disabling docker service ...
	I0528 21:32:51.652779   53852 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0528 21:32:51.665448   53852 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0528 21:32:51.676694   53852 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0528 21:32:51.790493   53852 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0528 21:32:51.898846   53852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0528 21:32:51.911016   53852 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 21:32:51.929233   53852 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0528 21:32:51.929297   53852 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:32:51.939059   53852 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0528 21:32:51.939114   53852 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:32:51.948804   53852 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:32:51.957789   53852 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:32:51.966712   53852 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0528 21:32:51.974992   53852 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:32:51.983233   53852 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:32:51.998184   53852 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:32:52.006463   53852 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0528 21:32:52.013649   53852 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0528 21:32:52.013699   53852 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0528 21:32:52.024344   53852 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0528 21:32:52.032611   53852 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 21:32:52.144412   53852 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0528 21:32:52.266180   53852 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0528 21:32:52.266264   53852 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0528 21:32:52.271363   53852 start.go:562] Will wait 60s for crictl version
	I0528 21:32:52.271423   53852 ssh_runner.go:195] Run: which crictl
	I0528 21:32:52.275043   53852 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0528 21:32:52.312183   53852 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.22.3
	RuntimeApiVersion:  v1alpha2
	I0528 21:32:52.312288   53852 ssh_runner.go:195] Run: crio --version
	I0528 21:32:52.346374   53852 ssh_runner.go:195] Run: crio --version
	I0528 21:32:52.384424   53852 out.go:177] * Preparing Kubernetes v1.24.1 on CRI-O 1.22.3 ...
	I0528 21:32:51.413978   53591 pod_ready.go:92] pod "etcd-pause-547166" in "kube-system" namespace has status "Ready":"True"
	I0528 21:32:51.413999   53591 pod_ready.go:81] duration metric: took 12.507429981s for pod "etcd-pause-547166" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:51.414009   53591 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-547166" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:53.424142   53591 pod_ready.go:102] pod "kube-apiserver-pause-547166" in "kube-system" namespace has status "Ready":"False"
	I0528 21:32:53.922278   53591 pod_ready.go:92] pod "kube-apiserver-pause-547166" in "kube-system" namespace has status "Ready":"True"
	I0528 21:32:53.922309   53591 pod_ready.go:81] duration metric: took 2.508292133s for pod "kube-apiserver-pause-547166" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:53.922322   53591 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-547166" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:53.928408   53591 pod_ready.go:92] pod "kube-controller-manager-pause-547166" in "kube-system" namespace has status "Ready":"True"
	I0528 21:32:53.928432   53591 pod_ready.go:81] duration metric: took 6.100929ms for pod "kube-controller-manager-pause-547166" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:53.928444   53591 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-94v5m" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:53.934693   53591 pod_ready.go:92] pod "kube-proxy-94v5m" in "kube-system" namespace has status "Ready":"True"
	I0528 21:32:53.934720   53591 pod_ready.go:81] duration metric: took 6.267751ms for pod "kube-proxy-94v5m" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:53.934733   53591 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-547166" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:53.940475   53591 pod_ready.go:92] pod "kube-scheduler-pause-547166" in "kube-system" namespace has status "Ready":"True"
	I0528 21:32:53.940500   53591 pod_ready.go:81] duration metric: took 5.757636ms for pod "kube-scheduler-pause-547166" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:53.940510   53591 pod_ready.go:38] duration metric: took 15.043850969s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 21:32:53.940530   53591 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0528 21:32:53.958826   53591 ops.go:34] apiserver oom_adj: -16
	I0528 21:32:53.958851   53591 kubeadm.go:591] duration metric: took 35.087407147s to restartPrimaryControlPlane
	I0528 21:32:53.958863   53591 kubeadm.go:393] duration metric: took 35.30663683s to StartCluster
	I0528 21:32:53.958885   53591 settings.go:142] acquiring lock: {Name:mkc1182212d780ab38539839372c25eb989b2e70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:32:53.958960   53591 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 21:32:53.959804   53591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/kubeconfig: {Name:mkb9ab62df00efc21274382bbe7156f03baabac9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:32:53.960053   53591 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.108 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0528 21:32:53.964106   53591 out.go:177] * Verifying Kubernetes components...
	I0528 21:32:53.960160   53591 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0528 21:32:53.960417   53591 config.go:182] Loaded profile config "pause-547166": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 21:32:53.965624   53591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 21:32:53.966867   53591 out.go:177] * Enabled addons: 
	I0528 21:32:51.085630   53940 machine.go:94] provisionDockerMachine start ...
	I0528 21:32:51.085645   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .DriverName
	I0528 21:32:51.085818   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetSSHHostname
	I0528 21:32:51.088161   53940 main.go:141] libmachine: (cert-expiration-257793) DBG | domain cert-expiration-257793 has defined MAC address 52:54:00:a9:2c:e1 in network mk-cert-expiration-257793
	I0528 21:32:51.088591   53940 main.go:141] libmachine: (cert-expiration-257793) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:2c:e1", ip: ""} in network mk-cert-expiration-257793: {Iface:virbr4 ExpiryTime:2024-05-28 22:29:04 +0000 UTC Type:0 Mac:52:54:00:a9:2c:e1 Iaid: IPaddr:192.168.72.246 Prefix:24 Hostname:cert-expiration-257793 Clientid:01:52:54:00:a9:2c:e1}
	I0528 21:32:51.088622   53940 main.go:141] libmachine: (cert-expiration-257793) DBG | domain cert-expiration-257793 has defined IP address 192.168.72.246 and MAC address 52:54:00:a9:2c:e1 in network mk-cert-expiration-257793
	I0528 21:32:51.088790   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetSSHPort
	I0528 21:32:51.088943   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetSSHKeyPath
	I0528 21:32:51.089117   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetSSHKeyPath
	I0528 21:32:51.089308   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetSSHUsername
	I0528 21:32:51.089489   53940 main.go:141] libmachine: Using SSH client type: native
	I0528 21:32:51.089659   53940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.246 22 <nil> <nil>}
	I0528 21:32:51.089664   53940 main.go:141] libmachine: About to run SSH command:
	hostname
	I0528 21:32:51.211551   53940 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-257793
	
	I0528 21:32:51.211568   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetMachineName
	I0528 21:32:51.211868   53940 buildroot.go:166] provisioning hostname "cert-expiration-257793"
	I0528 21:32:51.211888   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetMachineName
	I0528 21:32:51.212115   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetSSHHostname
	I0528 21:32:51.215082   53940 main.go:141] libmachine: (cert-expiration-257793) DBG | domain cert-expiration-257793 has defined MAC address 52:54:00:a9:2c:e1 in network mk-cert-expiration-257793
	I0528 21:32:51.215583   53940 main.go:141] libmachine: (cert-expiration-257793) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:2c:e1", ip: ""} in network mk-cert-expiration-257793: {Iface:virbr4 ExpiryTime:2024-05-28 22:29:04 +0000 UTC Type:0 Mac:52:54:00:a9:2c:e1 Iaid: IPaddr:192.168.72.246 Prefix:24 Hostname:cert-expiration-257793 Clientid:01:52:54:00:a9:2c:e1}
	I0528 21:32:51.215609   53940 main.go:141] libmachine: (cert-expiration-257793) DBG | domain cert-expiration-257793 has defined IP address 192.168.72.246 and MAC address 52:54:00:a9:2c:e1 in network mk-cert-expiration-257793
	I0528 21:32:51.215679   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetSSHPort
	I0528 21:32:51.215864   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetSSHKeyPath
	I0528 21:32:51.216005   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetSSHKeyPath
	I0528 21:32:51.216136   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetSSHUsername
	I0528 21:32:51.216343   53940 main.go:141] libmachine: Using SSH client type: native
	I0528 21:32:51.216550   53940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.246 22 <nil> <nil>}
	I0528 21:32:51.216558   53940 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-257793 && echo "cert-expiration-257793" | sudo tee /etc/hostname
	I0528 21:32:51.353068   53940 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-257793
	
	I0528 21:32:51.353082   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetSSHHostname
	I0528 21:32:51.355955   53940 main.go:141] libmachine: (cert-expiration-257793) DBG | domain cert-expiration-257793 has defined MAC address 52:54:00:a9:2c:e1 in network mk-cert-expiration-257793
	I0528 21:32:51.356338   53940 main.go:141] libmachine: (cert-expiration-257793) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:2c:e1", ip: ""} in network mk-cert-expiration-257793: {Iface:virbr4 ExpiryTime:2024-05-28 22:29:04 +0000 UTC Type:0 Mac:52:54:00:a9:2c:e1 Iaid: IPaddr:192.168.72.246 Prefix:24 Hostname:cert-expiration-257793 Clientid:01:52:54:00:a9:2c:e1}
	I0528 21:32:51.356374   53940 main.go:141] libmachine: (cert-expiration-257793) DBG | domain cert-expiration-257793 has defined IP address 192.168.72.246 and MAC address 52:54:00:a9:2c:e1 in network mk-cert-expiration-257793
	I0528 21:32:51.356601   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetSSHPort
	I0528 21:32:51.356787   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetSSHKeyPath
	I0528 21:32:51.356972   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetSSHKeyPath
	I0528 21:32:51.357137   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetSSHUsername
	I0528 21:32:51.357403   53940 main.go:141] libmachine: Using SSH client type: native
	I0528 21:32:51.357609   53940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.246 22 <nil> <nil>}
	I0528 21:32:51.357626   53940 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-257793' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-257793/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-257793' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 21:32:51.474959   53940 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 21:32:51.474976   53940 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18966-3963/.minikube CaCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18966-3963/.minikube}
	I0528 21:32:51.474993   53940 buildroot.go:174] setting up certificates
	I0528 21:32:51.475002   53940 provision.go:84] configureAuth start
	I0528 21:32:51.475039   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetMachineName
	I0528 21:32:51.475335   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetIP
	I0528 21:32:51.478216   53940 main.go:141] libmachine: (cert-expiration-257793) DBG | domain cert-expiration-257793 has defined MAC address 52:54:00:a9:2c:e1 in network mk-cert-expiration-257793
	I0528 21:32:51.478688   53940 main.go:141] libmachine: (cert-expiration-257793) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:2c:e1", ip: ""} in network mk-cert-expiration-257793: {Iface:virbr4 ExpiryTime:2024-05-28 22:29:04 +0000 UTC Type:0 Mac:52:54:00:a9:2c:e1 Iaid: IPaddr:192.168.72.246 Prefix:24 Hostname:cert-expiration-257793 Clientid:01:52:54:00:a9:2c:e1}
	I0528 21:32:51.478712   53940 main.go:141] libmachine: (cert-expiration-257793) DBG | domain cert-expiration-257793 has defined IP address 192.168.72.246 and MAC address 52:54:00:a9:2c:e1 in network mk-cert-expiration-257793
	I0528 21:32:51.478899   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetSSHHostname
	I0528 21:32:51.481380   53940 main.go:141] libmachine: (cert-expiration-257793) DBG | domain cert-expiration-257793 has defined MAC address 52:54:00:a9:2c:e1 in network mk-cert-expiration-257793
	I0528 21:32:51.481784   53940 main.go:141] libmachine: (cert-expiration-257793) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:2c:e1", ip: ""} in network mk-cert-expiration-257793: {Iface:virbr4 ExpiryTime:2024-05-28 22:29:04 +0000 UTC Type:0 Mac:52:54:00:a9:2c:e1 Iaid: IPaddr:192.168.72.246 Prefix:24 Hostname:cert-expiration-257793 Clientid:01:52:54:00:a9:2c:e1}
	I0528 21:32:51.481810   53940 main.go:141] libmachine: (cert-expiration-257793) DBG | domain cert-expiration-257793 has defined IP address 192.168.72.246 and MAC address 52:54:00:a9:2c:e1 in network mk-cert-expiration-257793
	I0528 21:32:51.481905   53940 provision.go:143] copyHostCerts
	I0528 21:32:51.481963   53940 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem, removing ...
	I0528 21:32:51.481972   53940 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem
	I0528 21:32:51.482020   53940 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem (1078 bytes)
	I0528 21:32:51.482166   53940 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem, removing ...
	I0528 21:32:51.482170   53940 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem
	I0528 21:32:51.482191   53940 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem (1123 bytes)
	I0528 21:32:51.482252   53940 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem, removing ...
	I0528 21:32:51.482257   53940 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem
	I0528 21:32:51.482296   53940 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem (1675 bytes)
	I0528 21:32:51.482362   53940 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-257793 san=[127.0.0.1 192.168.72.246 cert-expiration-257793 localhost minikube]
	I0528 21:32:51.730091   53940 provision.go:177] copyRemoteCerts
	I0528 21:32:51.730135   53940 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 21:32:51.730155   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetSSHHostname
	I0528 21:32:51.733028   53940 main.go:141] libmachine: (cert-expiration-257793) DBG | domain cert-expiration-257793 has defined MAC address 52:54:00:a9:2c:e1 in network mk-cert-expiration-257793
	I0528 21:32:51.733385   53940 main.go:141] libmachine: (cert-expiration-257793) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:2c:e1", ip: ""} in network mk-cert-expiration-257793: {Iface:virbr4 ExpiryTime:2024-05-28 22:29:04 +0000 UTC Type:0 Mac:52:54:00:a9:2c:e1 Iaid: IPaddr:192.168.72.246 Prefix:24 Hostname:cert-expiration-257793 Clientid:01:52:54:00:a9:2c:e1}
	I0528 21:32:51.733414   53940 main.go:141] libmachine: (cert-expiration-257793) DBG | domain cert-expiration-257793 has defined IP address 192.168.72.246 and MAC address 52:54:00:a9:2c:e1 in network mk-cert-expiration-257793
	I0528 21:32:51.733599   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetSSHPort
	I0528 21:32:51.733856   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetSSHKeyPath
	I0528 21:32:51.734054   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetSSHUsername
	I0528 21:32:51.734205   53940 sshutil.go:53] new ssh client: &{IP:192.168.72.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/cert-expiration-257793/id_rsa Username:docker}
	I0528 21:32:51.828230   53940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0528 21:32:51.857890   53940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0528 21:32:51.883847   53940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0528 21:32:51.910068   53940 provision.go:87] duration metric: took 435.055759ms to configureAuth
	I0528 21:32:51.910087   53940 buildroot.go:189] setting minikube options for container-runtime
	I0528 21:32:51.910313   53940 config.go:182] Loaded profile config "cert-expiration-257793": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 21:32:51.910392   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetSSHHostname
	I0528 21:32:51.913668   53940 main.go:141] libmachine: (cert-expiration-257793) DBG | domain cert-expiration-257793 has defined MAC address 52:54:00:a9:2c:e1 in network mk-cert-expiration-257793
	I0528 21:32:51.914152   53940 main.go:141] libmachine: (cert-expiration-257793) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:2c:e1", ip: ""} in network mk-cert-expiration-257793: {Iface:virbr4 ExpiryTime:2024-05-28 22:29:04 +0000 UTC Type:0 Mac:52:54:00:a9:2c:e1 Iaid: IPaddr:192.168.72.246 Prefix:24 Hostname:cert-expiration-257793 Clientid:01:52:54:00:a9:2c:e1}
	I0528 21:32:51.914172   53940 main.go:141] libmachine: (cert-expiration-257793) DBG | domain cert-expiration-257793 has defined IP address 192.168.72.246 and MAC address 52:54:00:a9:2c:e1 in network mk-cert-expiration-257793
	I0528 21:32:51.914516   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetSSHPort
	I0528 21:32:51.914765   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetSSHKeyPath
	I0528 21:32:51.914915   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetSSHKeyPath
	I0528 21:32:51.915085   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetSSHUsername
	I0528 21:32:51.915279   53940 main.go:141] libmachine: Using SSH client type: native
	I0528 21:32:51.915472   53940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.246 22 <nil> <nil>}
	I0528 21:32:51.915482   53940 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0528 21:32:53.968094   53591 addons.go:510] duration metric: took 7.938072ms for enable addons: enabled=[]
	I0528 21:32:54.157343   53591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 21:32:54.177591   53591 node_ready.go:35] waiting up to 6m0s for node "pause-547166" to be "Ready" ...
	I0528 21:32:54.180983   53591 node_ready.go:49] node "pause-547166" has status "Ready":"True"
	I0528 21:32:54.181011   53591 node_ready.go:38] duration metric: took 3.378007ms for node "pause-547166" to be "Ready" ...
	I0528 21:32:54.181019   53591 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 21:32:54.186075   53591 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7rb9n" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:54.318716   53591 pod_ready.go:92] pod "coredns-7db6d8ff4d-7rb9n" in "kube-system" namespace has status "Ready":"True"
	I0528 21:32:54.318740   53591 pod_ready.go:81] duration metric: took 132.642486ms for pod "coredns-7db6d8ff4d-7rb9n" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:54.318749   53591 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-547166" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:54.718452   53591 pod_ready.go:92] pod "etcd-pause-547166" in "kube-system" namespace has status "Ready":"True"
	I0528 21:32:54.718475   53591 pod_ready.go:81] duration metric: took 399.720706ms for pod "etcd-pause-547166" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:54.718483   53591 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-547166" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:55.118676   53591 pod_ready.go:92] pod "kube-apiserver-pause-547166" in "kube-system" namespace has status "Ready":"True"
	I0528 21:32:55.118700   53591 pod_ready.go:81] duration metric: took 400.211491ms for pod "kube-apiserver-pause-547166" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:55.118709   53591 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-547166" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:55.518015   53591 pod_ready.go:92] pod "kube-controller-manager-pause-547166" in "kube-system" namespace has status "Ready":"True"
	I0528 21:32:55.518059   53591 pod_ready.go:81] duration metric: took 399.342005ms for pod "kube-controller-manager-pause-547166" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:55.518082   53591 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-94v5m" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:52.385621   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetIP
	I0528 21:32:52.388183   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:52.388481   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:2c:e6", ip: ""} in network mk-stopped-upgrade-742900: {Iface:virbr3 ExpiryTime:2024-05-28 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d5:2c:e6 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:stopped-upgrade-742900 Clientid:01:52:54:00:d5:2c:e6}
	I0528 21:32:52.388527   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined IP address 192.168.61.251 and MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:52.388735   53852 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0528 21:32:52.392824   53852 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 21:32:52.404098   53852 kubeadm.go:877] updating cluster {Name:stopped-upgrade-742900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-742900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.251 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0528 21:32:52.404213   53852 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0528 21:32:52.404283   53852 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 21:32:52.445101   53852 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.1". assuming images are not preloaded.
	I0528 21:32:52.445178   53852 ssh_runner.go:195] Run: which lz4
	I0528 21:32:52.449063   53852 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0528 21:32:52.453241   53852 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0528 21:32:52.453269   53852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (496813465 bytes)
	I0528 21:32:54.072247   53852 crio.go:462] duration metric: took 1.623205879s to copy over tarball
	I0528 21:32:54.072333   53852 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
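
With no preloaded images on the VM, minikube copies the 497 MB preload tarball over SSH and unpacks it into /var through lz4, as the two Run lines above record. A sketch of the extraction step driven from Go with os/exec, assuming tar and lz4 are on the guest's PATH:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Mirror of the command in the log: unpack the preload tarball into /var,
	// preserving xattrs so file capabilities survive the copy.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
	log.Println("preload extracted")
}
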
	I0528 21:32:55.918489   53591 pod_ready.go:92] pod "kube-proxy-94v5m" in "kube-system" namespace has status "Ready":"True"
	I0528 21:32:55.918524   53591 pod_ready.go:81] duration metric: took 400.434487ms for pod "kube-proxy-94v5m" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:55.918538   53591 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-547166" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:56.318565   53591 pod_ready.go:92] pod "kube-scheduler-pause-547166" in "kube-system" namespace has status "Ready":"True"
	I0528 21:32:56.318596   53591 pod_ready.go:81] duration metric: took 400.049782ms for pod "kube-scheduler-pause-547166" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:56.318607   53591 pod_ready.go:38] duration metric: took 2.137578666s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
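
The pod_ready waits above poll each control-plane pod until its Ready condition turns True. A minimal client-go sketch of that single-pod check, assuming a kubeconfig path (the pod and namespace names come from the log); minikube's own implementation wraps this differently:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the named pod's Ready condition is True.
func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ok, err := podReady(cs, "kube-system", "kube-scheduler-pause-547166")
	fmt.Println(ok, err)
}
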
	I0528 21:32:56.318625   53591 api_server.go:52] waiting for apiserver process to appear ...
	I0528 21:32:56.318691   53591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:32:56.332483   53591 api_server.go:72] duration metric: took 2.372396031s to wait for apiserver process to appear ...
	I0528 21:32:56.332512   53591 api_server.go:88] waiting for apiserver healthz status ...
	I0528 21:32:56.332532   53591 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0528 21:32:56.336846   53591 api_server.go:279] https://192.168.50.108:8443/healthz returned 200:
	ok
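
The healthz probe is a plain HTTPS GET against the apiserver endpoint from the log; the recorded 200 and "ok" body are its response. A stripped-down sketch; certificate verification is skipped here only because no CA bundle is wired into this example:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Probe the apiserver health endpoint seen in the log.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only: no CA bundle
	}}
	resp, err := client.Get("https://192.168.50.108:8443/healthz")
	if err != nil {
		fmt.Println("unhealthy:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}
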
	I0528 21:32:56.337934   53591 api_server.go:141] control plane version: v1.30.1
	I0528 21:32:56.337968   53591 api_server.go:131] duration metric: took 5.44728ms to wait for apiserver health ...
	I0528 21:32:56.337978   53591 system_pods.go:43] waiting for kube-system pods to appear ...
	I0528 21:32:56.520524   53591 system_pods.go:59] 6 kube-system pods found
	I0528 21:32:56.520562   53591 system_pods.go:61] "coredns-7db6d8ff4d-7rb9n" [4e37fe79-cc67-4012-93b6-79ecc1f88ec7] Running
	I0528 21:32:56.520569   53591 system_pods.go:61] "etcd-pause-547166" [d9bfd727-090c-447f-8d1c-fb41302a4f99] Running
	I0528 21:32:56.520574   53591 system_pods.go:61] "kube-apiserver-pause-547166" [9bfb145c-9adf-4ba1-b909-e5d1fc40a080] Running
	I0528 21:32:56.520579   53591 system_pods.go:61] "kube-controller-manager-pause-547166" [605d47d8-50b2-4b0c-8ea3-0da1a7ce121a] Running
	I0528 21:32:56.520584   53591 system_pods.go:61] "kube-proxy-94v5m" [b8bf4bf8-52a8-4277-a373-bbeef065c3f5] Running
	I0528 21:32:56.520589   53591 system_pods.go:61] "kube-scheduler-pause-547166" [262f8e41-4c82-4d7a-8f49-7be7a940bd96] Running
	I0528 21:32:56.520596   53591 system_pods.go:74] duration metric: took 182.608276ms to wait for pod list to return data ...
	I0528 21:32:56.520620   53591 default_sa.go:34] waiting for default service account to be created ...
	I0528 21:32:56.718634   53591 default_sa.go:45] found service account: "default"
	I0528 21:32:56.718658   53591 default_sa.go:55] duration metric: took 198.026927ms for default service account to be created ...
	I0528 21:32:56.718666   53591 system_pods.go:116] waiting for k8s-apps to be running ...
	I0528 21:32:56.921427   53591 system_pods.go:86] 6 kube-system pods found
	I0528 21:32:56.921461   53591 system_pods.go:89] "coredns-7db6d8ff4d-7rb9n" [4e37fe79-cc67-4012-93b6-79ecc1f88ec7] Running
	I0528 21:32:56.921469   53591 system_pods.go:89] "etcd-pause-547166" [d9bfd727-090c-447f-8d1c-fb41302a4f99] Running
	I0528 21:32:56.921476   53591 system_pods.go:89] "kube-apiserver-pause-547166" [9bfb145c-9adf-4ba1-b909-e5d1fc40a080] Running
	I0528 21:32:56.921482   53591 system_pods.go:89] "kube-controller-manager-pause-547166" [605d47d8-50b2-4b0c-8ea3-0da1a7ce121a] Running
	I0528 21:32:56.921488   53591 system_pods.go:89] "kube-proxy-94v5m" [b8bf4bf8-52a8-4277-a373-bbeef065c3f5] Running
	I0528 21:32:56.921499   53591 system_pods.go:89] "kube-scheduler-pause-547166" [262f8e41-4c82-4d7a-8f49-7be7a940bd96] Running
	I0528 21:32:56.921509   53591 system_pods.go:126] duration metric: took 202.837672ms to wait for k8s-apps to be running ...
	I0528 21:32:56.921523   53591 system_svc.go:44] waiting for kubelet service to be running ....
	I0528 21:32:56.921583   53591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 21:32:56.944951   53591 system_svc.go:56] duration metric: took 23.421521ms WaitForService to wait for kubelet
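
The WaitForService step asks systemd whether kubelet is active; a zero exit status from systemctl is-active --quiet is the success signal. A simplified sketch via os/exec, checking the kubelet unit directly:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exit status 0 means the unit is active; anything else is treated as not running.
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}
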
	I0528 21:32:56.944982   53591 kubeadm.go:576] duration metric: took 2.984898909s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 21:32:56.945013   53591 node_conditions.go:102] verifying NodePressure condition ...
	I0528 21:32:57.119937   53591 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 21:32:57.119969   53591 node_conditions.go:123] node cpu capacity is 2
	I0528 21:32:57.119996   53591 node_conditions.go:105] duration metric: took 174.977103ms to run NodePressure ...
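
The NodePressure step reads capacity straight off the Node object (2 CPUs and 17734596Ki of ephemeral storage here). A short client-go sketch that prints the same two fields, assuming a kubeconfig path:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
}
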
	I0528 21:32:57.120011   53591 start.go:240] waiting for startup goroutines ...
	I0528 21:32:57.120024   53591 start.go:245] waiting for cluster config update ...
	I0528 21:32:57.120038   53591 start.go:254] writing updated cluster config ...
	I0528 21:32:57.120464   53591 ssh_runner.go:195] Run: rm -f paused
	I0528 21:32:57.177813   53591 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0528 21:32:57.180081   53591 out.go:177] * Done! kubectl is now configured to use "pause-547166" cluster and "default" namespace by default
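
The closing check compares kubectl's minor version with the cluster's and reports the skew (0 here, so no warning is emitted). A tiny sketch of that minor-skew arithmetic, assuming plain MAJOR.MINOR.PATCH version strings and a hypothetical minorSkew helper:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns |clientMinor - clusterMinor| for versions like "1.30.1".
func minorSkew(client, cluster string) (int, error) {
	minor := func(v string) (int, error) {
		parts := strings.Split(v, ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("bad version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	a, err := minor(client)
	if err != nil {
		return 0, err
	}
	b, err := minor(cluster)
	if err != nil {
		return 0, err
	}
	if a > b {
		return a - b, nil
	}
	return b - a, nil
}

func main() {
	skew, _ := minorSkew("1.30.1", "1.30.1")
	fmt.Println("minor skew:", skew) // 0, matching the log
}
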
	
	
	==> CRI-O <==
	May 28 21:32:59 pause-547166 crio[2469]: time="2024-05-28 21:32:59.642078587Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ec58ee6e-053f-4d81-8cf1-dc7e50dd5a41 name=/runtime.v1.RuntimeService/Version
	May 28 21:32:59 pause-547166 crio[2469]: time="2024-05-28 21:32:59.643276999Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e5bfbe2d-5bee-4bab-9c19-56bccc271b69 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:32:59 pause-547166 crio[2469]: time="2024-05-28 21:32:59.643646664Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716931979643624684,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e5bfbe2d-5bee-4bab-9c19-56bccc271b69 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:32:59 pause-547166 crio[2469]: time="2024-05-28 21:32:59.644581587Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d1cdae1b-00d9-4025-a2b6-899d4bfd44db name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:32:59 pause-547166 crio[2469]: time="2024-05-28 21:32:59.644657664Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d1cdae1b-00d9-4025-a2b6-899d4bfd44db name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:32:59 pause-547166 crio[2469]: time="2024-05-28 21:32:59.644968268Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:08b452587905adc995d6be896ca09b4cdc8f7e2c8006c1dde8f840780234c73f,PodSandboxId:55924196ca739084a0b3080fb8ccb335229f0224c24f9ddbadeb22b8111af101,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716931957799469178,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-94v5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8bf4bf8-52a8-4277-a373-bbeef065c3f5,},Annotations:map[string]string{io.kubernetes.container.hash: f685f60e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:226a0aa4c06bac0f992b982b30b88bf61cec228340067b5bdcf44890fa7c7909,PodSandboxId:e05247b016b1e7873ba28c88263fac15189e7852a7a983aa4e717d134f124944,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716931954043668132,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 347a9de7fc20962e4a0a09ef87e54be4,},Annotations:map[string]string{io.kubernetes.container.hash: edaaa3ad,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b38033e90e0d8625417b7232d5fc9711d210d72e3357eb81d5502488fd8e0d2c,PodSandboxId:f74361ca9d7e5fdcf09c3a46a9f6b465b50dc69accb28907c9c25d8a9844e1a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716931954010754734,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2509cd598b212d9d9a62337e8e8714,},Annotations:map[string]string{io.kubernetes.container.hash: 85c96444,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0df6e2b0ae6cae289b552e0b07bcf56cf39f7eea198d2ca44e9370b5edb0395b,PodSandboxId:8eb72e854358643cdcd081df9f3475788139fbbc2e7962e1455177f9d9b4aa89,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716931953999091903,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80a357c6b54cb6407e6733b556b49073,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9bc6b0cdba4c5acc81d7fd9a4fb877957755ca949984c6fe374005306c9e3da,PodSandboxId:aaeebdebe83d5c8a4c38add635a029fd700568be5bfb5ea47b5bb3167cedcd4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716931953987479961,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb895cce6b7bf17c8ad4004a2ee11778,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b435d362fd1970b9f1fca4268df3152c9a49524530b52b325893dcca65a19e,PodSandboxId:55924196ca739084a0b3080fb8ccb335229f0224c24f9ddbadeb22b8111af101,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_CREATED,CreatedAt:1716931952006438463,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-94v5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8bf4bf8-52a8-4277-a373-bbeef065c3f5,},Annotations:map[string]string{io.kubernetes.container.hash: f685f60e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminati
onGracePeriod: 30,},},&Container{Id:c9943d50da1bc58e17ce66130e50e9d6741f72e20b97ac6b5c28cf86486623dd,PodSandboxId:474d094735614e118cfbf97a8d06f1a3f530daa9379abe1e0a06ea7f2fd1c9f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716931949497318043,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7rb9n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e37fe79-cc67-4012-93b6-79ecc1f88ec7,},Annotations:map[string]string{io.kubernetes.container.hash: 802984b6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\
"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e2fdee3b8477fc1ea3db36c29f9c36519b2f5dfb905e78905d1b98b3beb3b98,PodSandboxId:474d094735614e118cfbf97a8d06f1a3f530daa9379abe1e0a06ea7f2fd1c9f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716931938586698177,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7rb9n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e37fe79-cc67-4012-93b6-79ecc1f88ec7,},Annotations:map[string]string{io.kubernetes.container.hash: 802984b6,io.kubernetes.container.port
s: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a31c9d04166019c2a49c4e013e4b94aa69a5bb31cef09d61eb8df29082fac19,PodSandboxId:aaeebdebe83d5c8a4c38add635a029fd700568be5bfb5ea47b5bb3167cedcd4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716931937823461156,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pa
use-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb895cce6b7bf17c8ad4004a2ee11778,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:389d655064d2f72cd1d31d88f738025b8d89ca4b2a1b99ae56387c719cc80594,PodSandboxId:e05247b016b1e7873ba28c88263fac15189e7852a7a983aa4e717d134f124944,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716931937808886708,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-547166,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: 347a9de7fc20962e4a0a09ef87e54be4,},Annotations:map[string]string{io.kubernetes.container.hash: edaaa3ad,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f249f829d4e7068aec569445d4895582618b4d27ac743a409d93faad930c9de,PodSandboxId:f74361ca9d7e5fdcf09c3a46a9f6b465b50dc69accb28907c9c25d8a9844e1a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716931937725314411,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 2b2509cd598b212d9d9a62337e8e8714,},Annotations:map[string]string{io.kubernetes.container.hash: 85c96444,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fd4e65044b3a9dcd1f16966186410e1f690d18a4a5d286b7249fa824cb0213d,PodSandboxId:8eb72e854358643cdcd081df9f3475788139fbbc2e7962e1455177f9d9b4aa89,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716931937585135604,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kuber
netes.pod.uid: 80a357c6b54cb6407e6733b556b49073,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d1cdae1b-00d9-4025-a2b6-899d4bfd44db name=/runtime.v1.RuntimeService/ListContainers
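
The Version, ImageFsInfo and ListContainers exchanges above are CRI gRPC calls served by CRI-O; the same container listing can usually be reproduced on the node with crictl. A hedged sketch that shells out to crictl, assuming it is installed and pointed at CRI-O's socket:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// `crictl ps -a -o json` exercises the same RuntimeService/ListContainers RPC
	// whose response is dumped in the CRI-O debug log above.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "-o", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(out))
}
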
	May 28 21:32:59 pause-547166 crio[2469]: time="2024-05-28 21:32:59.687358888Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6fb01985-0988-42c7-91a7-0900c8014883 name=/runtime.v1.RuntimeService/Version
	May 28 21:32:59 pause-547166 crio[2469]: time="2024-05-28 21:32:59.687439472Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6fb01985-0988-42c7-91a7-0900c8014883 name=/runtime.v1.RuntimeService/Version
	May 28 21:32:59 pause-547166 crio[2469]: time="2024-05-28 21:32:59.688480737Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3461f0cb-a003-434a-9ba5-20cf242d1091 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:32:59 pause-547166 crio[2469]: time="2024-05-28 21:32:59.689024845Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716931979689000957,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3461f0cb-a003-434a-9ba5-20cf242d1091 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:32:59 pause-547166 crio[2469]: time="2024-05-28 21:32:59.690298753Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=466ba0fa-a85c-4040-ace2-b118f41a6f29 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:32:59 pause-547166 crio[2469]: time="2024-05-28 21:32:59.690371174Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=466ba0fa-a85c-4040-ace2-b118f41a6f29 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:32:59 pause-547166 crio[2469]: time="2024-05-28 21:32:59.690803944Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:08b452587905adc995d6be896ca09b4cdc8f7e2c8006c1dde8f840780234c73f,PodSandboxId:55924196ca739084a0b3080fb8ccb335229f0224c24f9ddbadeb22b8111af101,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716931957799469178,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-94v5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8bf4bf8-52a8-4277-a373-bbeef065c3f5,},Annotations:map[string]string{io.kubernetes.container.hash: f685f60e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:226a0aa4c06bac0f992b982b30b88bf61cec228340067b5bdcf44890fa7c7909,PodSandboxId:e05247b016b1e7873ba28c88263fac15189e7852a7a983aa4e717d134f124944,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716931954043668132,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 347a9de7fc20962e4a0a09ef87e54be4,},Annotations:map[string]string{io.kubernetes.container.hash: edaaa3ad,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b38033e90e0d8625417b7232d5fc9711d210d72e3357eb81d5502488fd8e0d2c,PodSandboxId:f74361ca9d7e5fdcf09c3a46a9f6b465b50dc69accb28907c9c25d8a9844e1a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716931954010754734,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2509cd598b212d9d9a62337e8e8714,},Annotations:map[string]string{io.kubernetes.container.hash: 85c96444,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0df6e2b0ae6cae289b552e0b07bcf56cf39f7eea198d2ca44e9370b5edb0395b,PodSandboxId:8eb72e854358643cdcd081df9f3475788139fbbc2e7962e1455177f9d9b4aa89,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716931953999091903,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80a357c6b54cb6407e6733b556b49073,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9bc6b0cdba4c5acc81d7fd9a4fb877957755ca949984c6fe374005306c9e3da,PodSandboxId:aaeebdebe83d5c8a4c38add635a029fd700568be5bfb5ea47b5bb3167cedcd4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716931953987479961,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb895cce6b7bf17c8ad4004a2ee11778,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b435d362fd1970b9f1fca4268df3152c9a49524530b52b325893dcca65a19e,PodSandboxId:55924196ca739084a0b3080fb8ccb335229f0224c24f9ddbadeb22b8111af101,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_CREATED,CreatedAt:1716931952006438463,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-94v5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8bf4bf8-52a8-4277-a373-bbeef065c3f5,},Annotations:map[string]string{io.kubernetes.container.hash: f685f60e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminati
onGracePeriod: 30,},},&Container{Id:c9943d50da1bc58e17ce66130e50e9d6741f72e20b97ac6b5c28cf86486623dd,PodSandboxId:474d094735614e118cfbf97a8d06f1a3f530daa9379abe1e0a06ea7f2fd1c9f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716931949497318043,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7rb9n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e37fe79-cc67-4012-93b6-79ecc1f88ec7,},Annotations:map[string]string{io.kubernetes.container.hash: 802984b6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\
"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e2fdee3b8477fc1ea3db36c29f9c36519b2f5dfb905e78905d1b98b3beb3b98,PodSandboxId:474d094735614e118cfbf97a8d06f1a3f530daa9379abe1e0a06ea7f2fd1c9f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716931938586698177,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7rb9n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e37fe79-cc67-4012-93b6-79ecc1f88ec7,},Annotations:map[string]string{io.kubernetes.container.hash: 802984b6,io.kubernetes.container.port
s: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a31c9d04166019c2a49c4e013e4b94aa69a5bb31cef09d61eb8df29082fac19,PodSandboxId:aaeebdebe83d5c8a4c38add635a029fd700568be5bfb5ea47b5bb3167cedcd4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716931937823461156,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pa
use-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb895cce6b7bf17c8ad4004a2ee11778,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:389d655064d2f72cd1d31d88f738025b8d89ca4b2a1b99ae56387c719cc80594,PodSandboxId:e05247b016b1e7873ba28c88263fac15189e7852a7a983aa4e717d134f124944,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716931937808886708,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-547166,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: 347a9de7fc20962e4a0a09ef87e54be4,},Annotations:map[string]string{io.kubernetes.container.hash: edaaa3ad,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f249f829d4e7068aec569445d4895582618b4d27ac743a409d93faad930c9de,PodSandboxId:f74361ca9d7e5fdcf09c3a46a9f6b465b50dc69accb28907c9c25d8a9844e1a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716931937725314411,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 2b2509cd598b212d9d9a62337e8e8714,},Annotations:map[string]string{io.kubernetes.container.hash: 85c96444,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fd4e65044b3a9dcd1f16966186410e1f690d18a4a5d286b7249fa824cb0213d,PodSandboxId:8eb72e854358643cdcd081df9f3475788139fbbc2e7962e1455177f9d9b4aa89,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716931937585135604,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kuber
netes.pod.uid: 80a357c6b54cb6407e6733b556b49073,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=466ba0fa-a85c-4040-ace2-b118f41a6f29 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:32:59 pause-547166 crio[2469]: time="2024-05-28 21:32:59.740361632Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=6cb423c2-8558-4e46-8717-f5c010a13b8a name=/runtime.v1.RuntimeService/ListPodSandbox
	May 28 21:32:59 pause-547166 crio[2469]: time="2024-05-28 21:32:59.740576177Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:474d094735614e118cfbf97a8d06f1a3f530daa9379abe1e0a06ea7f2fd1c9f9,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-7rb9n,Uid:4e37fe79-cc67-4012-93b6-79ecc1f88ec7,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1716931937635402998,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-7rb9n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e37fe79-cc67-4012-93b6-79ecc1f88ec7,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-28T21:31:18.676578751Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e05247b016b1e7873ba28c88263fac15189e7852a7a983aa4e717d134f124944,Metadata:&PodSandboxMetadata{Name:etcd-pause-547166,Uid:347a9de7fc20962e4a0a09ef87e54be4,Namespace:kube-system,Attempt:1,
},State:SANDBOX_READY,CreatedAt:1716931937393823927,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 347a9de7fc20962e4a0a09ef87e54be4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.108:2379,kubernetes.io/config.hash: 347a9de7fc20962e4a0a09ef87e54be4,kubernetes.io/config.seen: 2024-05-28T21:31:03.955958040Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:55924196ca739084a0b3080fb8ccb335229f0224c24f9ddbadeb22b8111af101,Metadata:&PodSandboxMetadata{Name:kube-proxy-94v5m,Uid:b8bf4bf8-52a8-4277-a373-bbeef065c3f5,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1716931937376245109,Labels:map[string]string{controller-revision-hash: 5dbf89796d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-94v5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
b8bf4bf8-52a8-4277-a373-bbeef065c3f5,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-28T21:31:18.395273746Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:aaeebdebe83d5c8a4c38add635a029fd700568be5bfb5ea47b5bb3167cedcd4d,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-547166,Uid:fb895cce6b7bf17c8ad4004a2ee11778,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1716931937372453070,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb895cce6b7bf17c8ad4004a2ee11778,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: fb895cce6b7bf17c8ad4004a2ee11778,kubernetes.io/config.seen: 2024-05-28T21:31:03.955956981Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f74361ca9d7e5fdcf09c3a46a9f6b465b50dc69accb28907c9c25d8a9844
e1a5,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-547166,Uid:2b2509cd598b212d9d9a62337e8e8714,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1716931937337172270,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2509cd598b212d9d9a62337e8e8714,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.108:8443,kubernetes.io/config.hash: 2b2509cd598b212d9d9a62337e8e8714,kubernetes.io/config.seen: 2024-05-28T21:31:03.955952292Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8eb72e854358643cdcd081df9f3475788139fbbc2e7962e1455177f9d9b4aa89,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-547166,Uid:80a357c6b54cb6407e6733b556b49073,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1716931937318468367,Labels:map[string]str
ing{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80a357c6b54cb6407e6733b556b49073,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 80a357c6b54cb6407e6733b556b49073,kubernetes.io/config.seen: 2024-05-28T21:31:03.955956015Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:39414e4c52956bb782ca7c228e8ae9c9842eb01a5675527092c07a23d843bdce,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-lj6nb,Uid:4f0b188f-5830-4645-80d4-dd8cf560cc7c,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1716931878890169088,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-lj6nb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f0b188f-5830-4645-80d4-dd8cf560cc7c,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io
/config.seen: 2024-05-28T21:31:18.562169928Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=6cb423c2-8558-4e46-8717-f5c010a13b8a name=/runtime.v1.RuntimeService/ListPodSandbox
	May 28 21:32:59 pause-547166 crio[2469]: time="2024-05-28 21:32:59.741240734Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1ec0edf7-a101-4c5f-9f66-0cf25b33028d name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:32:59 pause-547166 crio[2469]: time="2024-05-28 21:32:59.741524672Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1ec0edf7-a101-4c5f-9f66-0cf25b33028d name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:32:59 pause-547166 crio[2469]: time="2024-05-28 21:32:59.741882912Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:08b452587905adc995d6be896ca09b4cdc8f7e2c8006c1dde8f840780234c73f,PodSandboxId:55924196ca739084a0b3080fb8ccb335229f0224c24f9ddbadeb22b8111af101,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716931957799469178,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-94v5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8bf4bf8-52a8-4277-a373-bbeef065c3f5,},Annotations:map[string]string{io.kubernetes.container.hash: f685f60e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:226a0aa4c06bac0f992b982b30b88bf61cec228340067b5bdcf44890fa7c7909,PodSandboxId:e05247b016b1e7873ba28c88263fac15189e7852a7a983aa4e717d134f124944,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716931954043668132,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 347a9de7fc20962e4a0a09ef87e54be4,},Annotations:map[string]string{io.kubernetes.container.hash: edaaa3ad,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b38033e90e0d8625417b7232d5fc9711d210d72e3357eb81d5502488fd8e0d2c,PodSandboxId:f74361ca9d7e5fdcf09c3a46a9f6b465b50dc69accb28907c9c25d8a9844e1a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716931954010754734,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2509cd598b212d9d9a62337e8e8714,},Annotations:map[string]string{io.kubernetes.container.hash: 85c96444,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0df6e2b0ae6cae289b552e0b07bcf56cf39f7eea198d2ca44e9370b5edb0395b,PodSandboxId:8eb72e854358643cdcd081df9f3475788139fbbc2e7962e1455177f9d9b4aa89,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716931953999091903,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80a357c6b54cb6407e6733b556b49073,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9bc6b0cdba4c5acc81d7fd9a4fb877957755ca949984c6fe374005306c9e3da,PodSandboxId:aaeebdebe83d5c8a4c38add635a029fd700568be5bfb5ea47b5bb3167cedcd4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716931953987479961,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb895cce6b7bf17c8ad4004a2ee11778,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b435d362fd1970b9f1fca4268df3152c9a49524530b52b325893dcca65a19e,PodSandboxId:55924196ca739084a0b3080fb8ccb335229f0224c24f9ddbadeb22b8111af101,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_CREATED,CreatedAt:1716931952006438463,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-94v5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8bf4bf8-52a8-4277-a373-bbeef065c3f5,},Annotations:map[string]string{io.kubernetes.container.hash: f685f60e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminati
onGracePeriod: 30,},},&Container{Id:c9943d50da1bc58e17ce66130e50e9d6741f72e20b97ac6b5c28cf86486623dd,PodSandboxId:474d094735614e118cfbf97a8d06f1a3f530daa9379abe1e0a06ea7f2fd1c9f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716931949497318043,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7rb9n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e37fe79-cc67-4012-93b6-79ecc1f88ec7,},Annotations:map[string]string{io.kubernetes.container.hash: 802984b6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\
"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e2fdee3b8477fc1ea3db36c29f9c36519b2f5dfb905e78905d1b98b3beb3b98,PodSandboxId:474d094735614e118cfbf97a8d06f1a3f530daa9379abe1e0a06ea7f2fd1c9f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716931938586698177,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7rb9n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e37fe79-cc67-4012-93b6-79ecc1f88ec7,},Annotations:map[string]string{io.kubernetes.container.hash: 802984b6,io.kubernetes.container.port
s: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a31c9d04166019c2a49c4e013e4b94aa69a5bb31cef09d61eb8df29082fac19,PodSandboxId:aaeebdebe83d5c8a4c38add635a029fd700568be5bfb5ea47b5bb3167cedcd4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716931937823461156,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pa
use-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb895cce6b7bf17c8ad4004a2ee11778,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:389d655064d2f72cd1d31d88f738025b8d89ca4b2a1b99ae56387c719cc80594,PodSandboxId:e05247b016b1e7873ba28c88263fac15189e7852a7a983aa4e717d134f124944,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716931937808886708,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-547166,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: 347a9de7fc20962e4a0a09ef87e54be4,},Annotations:map[string]string{io.kubernetes.container.hash: edaaa3ad,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f249f829d4e7068aec569445d4895582618b4d27ac743a409d93faad930c9de,PodSandboxId:f74361ca9d7e5fdcf09c3a46a9f6b465b50dc69accb28907c9c25d8a9844e1a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716931937725314411,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 2b2509cd598b212d9d9a62337e8e8714,},Annotations:map[string]string{io.kubernetes.container.hash: 85c96444,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fd4e65044b3a9dcd1f16966186410e1f690d18a4a5d286b7249fa824cb0213d,PodSandboxId:8eb72e854358643cdcd081df9f3475788139fbbc2e7962e1455177f9d9b4aa89,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716931937585135604,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kuber
netes.pod.uid: 80a357c6b54cb6407e6733b556b49073,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1ec0edf7-a101-4c5f-9f66-0cf25b33028d name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:32:59 pause-547166 crio[2469]: time="2024-05-28 21:32:59.744644577Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4dde6fec-687f-4db3-ac23-d2cba681c93b name=/runtime.v1.RuntimeService/Version
	May 28 21:32:59 pause-547166 crio[2469]: time="2024-05-28 21:32:59.744758612Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4dde6fec-687f-4db3-ac23-d2cba681c93b name=/runtime.v1.RuntimeService/Version
	May 28 21:32:59 pause-547166 crio[2469]: time="2024-05-28 21:32:59.746290696Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d43f036c-fad4-4792-a38b-7f91d65eec7b name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:32:59 pause-547166 crio[2469]: time="2024-05-28 21:32:59.746624180Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716931979746605654,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d43f036c-fad4-4792-a38b-7f91d65eec7b name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:32:59 pause-547166 crio[2469]: time="2024-05-28 21:32:59.747289045Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ed950136-5a52-4bce-b333-eb5d28268bbf name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:32:59 pause-547166 crio[2469]: time="2024-05-28 21:32:59.747336528Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ed950136-5a52-4bce-b333-eb5d28268bbf name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:32:59 pause-547166 crio[2469]: time="2024-05-28 21:32:59.747543850Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:08b452587905adc995d6be896ca09b4cdc8f7e2c8006c1dde8f840780234c73f,PodSandboxId:55924196ca739084a0b3080fb8ccb335229f0224c24f9ddbadeb22b8111af101,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716931957799469178,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-94v5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8bf4bf8-52a8-4277-a373-bbeef065c3f5,},Annotations:map[string]string{io.kubernetes.container.hash: f685f60e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:226a0aa4c06bac0f992b982b30b88bf61cec228340067b5bdcf44890fa7c7909,PodSandboxId:e05247b016b1e7873ba28c88263fac15189e7852a7a983aa4e717d134f124944,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716931954043668132,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 347a9de7fc20962e4a0a09ef87e54be4,},Annotations:map[string]string{io.kubernetes.container.hash: edaaa3ad,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b38033e90e0d8625417b7232d5fc9711d210d72e3357eb81d5502488fd8e0d2c,PodSandboxId:f74361ca9d7e5fdcf09c3a46a9f6b465b50dc69accb28907c9c25d8a9844e1a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716931954010754734,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2509cd598b212d9d9a62337e8e8714,},Annotations:map[string]string{io.kubernetes.container.hash: 85c96444,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0df6e2b0ae6cae289b552e0b07bcf56cf39f7eea198d2ca44e9370b5edb0395b,PodSandboxId:8eb72e854358643cdcd081df9f3475788139fbbc2e7962e1455177f9d9b4aa89,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716931953999091903,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80a357c6b54cb6407e6733b556b49073,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9bc6b0cdba4c5acc81d7fd9a4fb877957755ca949984c6fe374005306c9e3da,PodSandboxId:aaeebdebe83d5c8a4c38add635a029fd700568be5bfb5ea47b5bb3167cedcd4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716931953987479961,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb895cce6b7bf17c8ad4004a2ee11778,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b435d362fd1970b9f1fca4268df3152c9a49524530b52b325893dcca65a19e,PodSandboxId:55924196ca739084a0b3080fb8ccb335229f0224c24f9ddbadeb22b8111af101,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_CREATED,CreatedAt:1716931952006438463,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-94v5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8bf4bf8-52a8-4277-a373-bbeef065c3f5,},Annotations:map[string]string{io.kubernetes.container.hash: f685f60e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminati
onGracePeriod: 30,},},&Container{Id:c9943d50da1bc58e17ce66130e50e9d6741f72e20b97ac6b5c28cf86486623dd,PodSandboxId:474d094735614e118cfbf97a8d06f1a3f530daa9379abe1e0a06ea7f2fd1c9f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716931949497318043,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7rb9n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e37fe79-cc67-4012-93b6-79ecc1f88ec7,},Annotations:map[string]string{io.kubernetes.container.hash: 802984b6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\
"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e2fdee3b8477fc1ea3db36c29f9c36519b2f5dfb905e78905d1b98b3beb3b98,PodSandboxId:474d094735614e118cfbf97a8d06f1a3f530daa9379abe1e0a06ea7f2fd1c9f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716931938586698177,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7rb9n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e37fe79-cc67-4012-93b6-79ecc1f88ec7,},Annotations:map[string]string{io.kubernetes.container.hash: 802984b6,io.kubernetes.container.port
s: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a31c9d04166019c2a49c4e013e4b94aa69a5bb31cef09d61eb8df29082fac19,PodSandboxId:aaeebdebe83d5c8a4c38add635a029fd700568be5bfb5ea47b5bb3167cedcd4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716931937823461156,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pa
use-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb895cce6b7bf17c8ad4004a2ee11778,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:389d655064d2f72cd1d31d88f738025b8d89ca4b2a1b99ae56387c719cc80594,PodSandboxId:e05247b016b1e7873ba28c88263fac15189e7852a7a983aa4e717d134f124944,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716931937808886708,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-547166,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: 347a9de7fc20962e4a0a09ef87e54be4,},Annotations:map[string]string{io.kubernetes.container.hash: edaaa3ad,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f249f829d4e7068aec569445d4895582618b4d27ac743a409d93faad930c9de,PodSandboxId:f74361ca9d7e5fdcf09c3a46a9f6b465b50dc69accb28907c9c25d8a9844e1a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716931937725314411,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 2b2509cd598b212d9d9a62337e8e8714,},Annotations:map[string]string{io.kubernetes.container.hash: 85c96444,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fd4e65044b3a9dcd1f16966186410e1f690d18a4a5d286b7249fa824cb0213d,PodSandboxId:8eb72e854358643cdcd081df9f3475788139fbbc2e7962e1455177f9d9b4aa89,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716931937585135604,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kuber
netes.pod.uid: 80a357c6b54cb6407e6733b556b49073,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ed950136-5a52-4bce-b333-eb5d28268bbf name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	08b452587905a       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   22 seconds ago      Running             kube-proxy                3                   55924196ca739       kube-proxy-94v5m
	226a0aa4c06ba       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   26 seconds ago      Running             etcd                      2                   e05247b016b1e       etcd-pause-547166
	b38033e90e0d8       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   26 seconds ago      Running             kube-apiserver            2                   f74361ca9d7e5       kube-apiserver-pause-547166
	0df6e2b0ae6ca       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   26 seconds ago      Running             kube-controller-manager   2                   8eb72e8543586       kube-controller-manager-pause-547166
	c9bc6b0cdba4c       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   26 seconds ago      Running             kube-scheduler            2                   aaeebdebe83d5       kube-scheduler-pause-547166
	32b435d362fd1       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   28 seconds ago      Created             kube-proxy                2                   55924196ca739       kube-proxy-94v5m
	c9943d50da1bc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   30 seconds ago      Running             coredns                   2                   474d094735614       coredns-7db6d8ff4d-7rb9n
	3e2fdee3b8477       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   41 seconds ago      Exited              coredns                   1                   474d094735614       coredns-7db6d8ff4d-7rb9n
	2a31c9d041660       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   42 seconds ago      Exited              kube-scheduler            1                   aaeebdebe83d5       kube-scheduler-pause-547166
	389d655064d2f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   42 seconds ago      Exited              etcd                      1                   e05247b016b1e       etcd-pause-547166
	1f249f829d4e7       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   42 seconds ago      Exited              kube-apiserver            1                   f74361ca9d7e5       kube-apiserver-pause-547166
	6fd4e65044b3a       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   42 seconds ago      Exited              kube-controller-manager   1                   8eb72e8543586       kube-controller-manager-pause-547166
	
	
	==> coredns [3e2fdee3b8477fc1ea3db36c29f9c36519b2f5dfb905e78905d1b98b3beb3b98] <==
	
	
	==> coredns [c9943d50da1bc58e17ce66130e50e9d6741f72e20b97ac6b5c28cf86486623dd] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:55880 - 60924 "HINFO IN 8492936006546547809.5912867834691675911. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015392803s
	
	
	==> describe nodes <==
	Name:               pause-547166
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-547166
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82
	                    minikube.k8s.io/name=pause-547166
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_28T21_31_04_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 21:31:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-547166
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 21:32:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 May 2024 21:32:37 +0000   Tue, 28 May 2024 21:30:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 May 2024 21:32:37 +0000   Tue, 28 May 2024 21:30:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 May 2024 21:32:37 +0000   Tue, 28 May 2024 21:30:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 May 2024 21:32:37 +0000   Tue, 28 May 2024 21:31:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.108
	  Hostname:    pause-547166
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a077d34a1a8412d86474f83483a7b3c
	  System UUID:                2a077d34-a1a8-412d-8647-4f83483a7b3c
	  Boot ID:                    1a8c1b6a-3f36-4042-b21b-7034fbfc2291
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-7rb9n                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     102s
	  kube-system                 etcd-pause-547166                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         116s
	  kube-system                 kube-apiserver-pause-547166             250m (12%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-pause-547166    200m (10%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-94v5m                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-scheduler-pause-547166             100m (5%)     0 (0%)      0 (0%)           0 (0%)         116s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 99s                kube-proxy       
	  Normal  Starting                 22s                kube-proxy       
	  Normal  Starting                 117s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     116s               kubelet          Node pause-547166 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  116s               kubelet          Node pause-547166 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s               kubelet          Node pause-547166 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                116s               kubelet          Node pause-547166 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  116s               kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           103s               node-controller  Node pause-547166 event: Registered Node pause-547166 in Controller
	  Normal  Starting                 27s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27s (x8 over 27s)  kubelet          Node pause-547166 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27s (x8 over 27s)  kubelet          Node pause-547166 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27s (x7 over 27s)  kubelet          Node pause-547166 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11s                node-controller  Node pause-547166 event: Registered Node pause-547166 in Controller
	
	
	==> dmesg <==
	[  +0.054646] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063636] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.197812] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.153379] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.307143] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.443861] systemd-fstab-generator[762]: Ignoring "noauto" option for root device
	[  +0.057667] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.087769] systemd-fstab-generator[952]: Ignoring "noauto" option for root device
	[  +0.065828] kauditd_printk_skb: 18 callbacks suppressed
	[May28 21:31] systemd-fstab-generator[1291]: Ignoring "noauto" option for root device
	[  +0.080490] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.222682] kauditd_printk_skb: 18 callbacks suppressed
	[  +8.230516] systemd-fstab-generator[1516]: Ignoring "noauto" option for root device
	[ +11.638076] kauditd_printk_skb: 84 callbacks suppressed
	[May28 21:32] systemd-fstab-generator[2388]: Ignoring "noauto" option for root device
	[  +0.151668] systemd-fstab-generator[2400]: Ignoring "noauto" option for root device
	[  +0.182486] systemd-fstab-generator[2414]: Ignoring "noauto" option for root device
	[  +0.149048] systemd-fstab-generator[2426]: Ignoring "noauto" option for root device
	[  +0.342061] systemd-fstab-generator[2454]: Ignoring "noauto" option for root device
	[  +6.944282] systemd-fstab-generator[2581]: Ignoring "noauto" option for root device
	[  +0.070807] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.343947] kauditd_printk_skb: 87 callbacks suppressed
	[ +10.764591] systemd-fstab-generator[3423]: Ignoring "noauto" option for root device
	[  +4.608832] kauditd_printk_skb: 51 callbacks suppressed
	[ +16.192933] systemd-fstab-generator[3822]: Ignoring "noauto" option for root device
	
	
	==> etcd [226a0aa4c06bac0f992b982b30b88bf61cec228340067b5bdcf44890fa7c7909] <==
	{"level":"info","ts":"2024-05-28T21:32:34.561165Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"d5ce96a8bfe0f5c1","initial-advertise-peer-urls":["https://192.168.50.108:2380"],"listen-peer-urls":["https://192.168.50.108:2380"],"advertise-client-urls":["https://192.168.50.108:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.108:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-28T21:32:34.561251Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-28T21:32:34.560906Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.108:2380"}
	{"level":"info","ts":"2024-05-28T21:32:34.561344Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.108:2380"}
	{"level":"info","ts":"2024-05-28T21:32:35.691791Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d5ce96a8bfe0f5c1 is starting a new election at term 3"}
	{"level":"info","ts":"2024-05-28T21:32:35.691842Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d5ce96a8bfe0f5c1 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-05-28T21:32:35.691871Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d5ce96a8bfe0f5c1 received MsgPreVoteResp from d5ce96a8bfe0f5c1 at term 3"}
	{"level":"info","ts":"2024-05-28T21:32:35.691902Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d5ce96a8bfe0f5c1 became candidate at term 4"}
	{"level":"info","ts":"2024-05-28T21:32:35.69191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d5ce96a8bfe0f5c1 received MsgVoteResp from d5ce96a8bfe0f5c1 at term 4"}
	{"level":"info","ts":"2024-05-28T21:32:35.691919Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d5ce96a8bfe0f5c1 became leader at term 4"}
	{"level":"info","ts":"2024-05-28T21:32:35.691926Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d5ce96a8bfe0f5c1 elected leader d5ce96a8bfe0f5c1 at term 4"}
	{"level":"info","ts":"2024-05-28T21:32:35.697177Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-28T21:32:35.697128Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"d5ce96a8bfe0f5c1","local-member-attributes":"{Name:pause-547166 ClientURLs:[https://192.168.50.108:2379]}","request-path":"/0/members/d5ce96a8bfe0f5c1/attributes","cluster-id":"38e677d7bff02ecf","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-28T21:32:35.697909Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-28T21:32:35.698109Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-28T21:32:35.698121Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-28T21:32:35.699155Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.108:2379"}
	{"level":"info","ts":"2024-05-28T21:32:35.699914Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-05-28T21:32:58.764547Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"211.796949ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17708593269401140572 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.50.108\" mod_revision:467 > success:<request_put:<key:\"/registry/masterleases/192.168.50.108\" value_size:67 lease:8485221232546364762 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.108\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-05-28T21:32:58.764654Z","caller":"traceutil/trace.go:171","msg":"trace[274044149] transaction","detail":"{read_only:false; response_revision:481; number_of_response:1; }","duration":"338.047944ms","start":"2024-05-28T21:32:58.42659Z","end":"2024-05-28T21:32:58.764638Z","steps":["trace[274044149] 'process raft request'  (duration: 125.676342ms)","trace[274044149] 'compare'  (duration: 211.663718ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-28T21:32:58.764766Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-28T21:32:58.426577Z","time spent":"338.108573ms","remote":"127.0.0.1:56670","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":120,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.50.108\" mod_revision:467 > success:<request_put:<key:\"/registry/masterleases/192.168.50.108\" value_size:67 lease:8485221232546364762 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.108\" > >"}
	{"level":"warn","ts":"2024-05-28T21:32:59.280613Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"387.324136ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-28T21:32:59.280759Z","caller":"traceutil/trace.go:171","msg":"trace[901839855] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:481; }","duration":"387.43297ms","start":"2024-05-28T21:32:58.893261Z","end":"2024-05-28T21:32:59.280694Z","steps":["trace[901839855] 'range keys from in-memory index tree'  (duration: 387.287022ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T21:32:59.280828Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"227.726104ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-28T21:32:59.281086Z","caller":"traceutil/trace.go:171","msg":"trace[708541599] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:481; }","duration":"228.151657ms","start":"2024-05-28T21:32:59.052923Z","end":"2024-05-28T21:32:59.281075Z","steps":["trace[708541599] 'range keys from in-memory index tree'  (duration: 227.585228ms)"],"step_count":1}
	
	
	==> etcd [389d655064d2f72cd1d31d88f738025b8d89ca4b2a1b99ae56387c719cc80594] <==
	{"level":"info","ts":"2024-05-28T21:32:19.041775Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-28T21:32:20.360485Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d5ce96a8bfe0f5c1 is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-28T21:32:20.360591Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d5ce96a8bfe0f5c1 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-28T21:32:20.360657Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d5ce96a8bfe0f5c1 received MsgPreVoteResp from d5ce96a8bfe0f5c1 at term 2"}
	{"level":"info","ts":"2024-05-28T21:32:20.360693Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d5ce96a8bfe0f5c1 became candidate at term 3"}
	{"level":"info","ts":"2024-05-28T21:32:20.360858Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d5ce96a8bfe0f5c1 received MsgVoteResp from d5ce96a8bfe0f5c1 at term 3"}
	{"level":"info","ts":"2024-05-28T21:32:20.360903Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d5ce96a8bfe0f5c1 became leader at term 3"}
	{"level":"info","ts":"2024-05-28T21:32:20.360948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d5ce96a8bfe0f5c1 elected leader d5ce96a8bfe0f5c1 at term 3"}
	{"level":"info","ts":"2024-05-28T21:32:20.363891Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"d5ce96a8bfe0f5c1","local-member-attributes":"{Name:pause-547166 ClientURLs:[https://192.168.50.108:2379]}","request-path":"/0/members/d5ce96a8bfe0f5c1/attributes","cluster-id":"38e677d7bff02ecf","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-28T21:32:20.363948Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-28T21:32:20.364046Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-28T21:32:20.364591Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-28T21:32:20.364658Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-28T21:32:20.367065Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-28T21:32:20.367621Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.108:2379"}
	{"level":"info","ts":"2024-05-28T21:32:21.781677Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-05-28T21:32:21.781837Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"pause-547166","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.108:2380"],"advertise-client-urls":["https://192.168.50.108:2379"]}
	{"level":"warn","ts":"2024-05-28T21:32:21.781903Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-28T21:32:21.781982Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-28T21:32:21.808063Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.108:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-28T21:32:21.808119Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.108:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-28T21:32:21.808173Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d5ce96a8bfe0f5c1","current-leader-member-id":"d5ce96a8bfe0f5c1"}
	{"level":"info","ts":"2024-05-28T21:32:21.811454Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.50.108:2380"}
	{"level":"info","ts":"2024-05-28T21:32:21.81155Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.50.108:2380"}
	{"level":"info","ts":"2024-05-28T21:32:21.811575Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"pause-547166","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.108:2380"],"advertise-client-urls":["https://192.168.50.108:2379"]}
	
	
	==> kernel <==
	 21:33:00 up 2 min,  0 users,  load average: 0.87, 0.39, 0.15
	Linux pause-547166 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1f249f829d4e7068aec569445d4895582618b4d27ac743a409d93faad930c9de] <==
	W0528 21:32:31.228000       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:32:31.255819       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:32:31.264819       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:32:31.293524       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:32:31.336693       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:32:31.341514       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:32:31.361468       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:32:31.422256       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:32:31.426986       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:32:31.432001       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:32:31.443594       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:32:31.451545       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:32:31.470457       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:32:31.499697       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:32:31.535560       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:32:31.554368       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:32:31.601413       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:32:31.646511       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:32:31.653253       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:32:31.690851       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:32:31.690960       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:32:31.743805       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:32:31.794313       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:32:31.843787       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:32:31.855982       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [b38033e90e0d8625417b7232d5fc9711d210d72e3357eb81d5502488fd8e0d2c] <==
	I0528 21:32:37.010779       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0528 21:32:37.044056       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0528 21:32:37.047260       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0528 21:32:37.047346       1 policy_source.go:224] refreshing policies
	I0528 21:32:37.078302       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0528 21:32:37.078413       1 shared_informer.go:320] Caches are synced for configmaps
	I0528 21:32:37.078500       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0528 21:32:37.080246       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0528 21:32:37.080311       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0528 21:32:37.080536       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0528 21:32:37.083553       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0528 21:32:37.099945       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0528 21:32:37.118559       1 cache.go:39] Caches are synced for autoregister controller
	I0528 21:32:37.896233       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0528 21:32:38.732122       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0528 21:32:38.749374       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0528 21:32:38.786105       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0528 21:32:38.815249       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0528 21:32:38.821315       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0528 21:32:49.744084       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0528 21:32:49.746205       1 controller.go:615] quota admission added evaluator for: endpoints
	I0528 21:32:58.765424       1 trace.go:236] Trace[1418498380]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.50.108,type:*v1.Endpoints,resource:apiServerIPInfo (28-May-2024 21:32:58.185) (total time: 579ms):
	Trace[1418498380]: ---"Transaction prepared" 239ms (21:32:58.426)
	Trace[1418498380]: ---"Txn call completed" 339ms (21:32:58.765)
	Trace[1418498380]: [579.701849ms] [579.701849ms] END
	
	
	==> kube-controller-manager [0df6e2b0ae6cae289b552e0b07bcf56cf39f7eea198d2ca44e9370b5edb0395b] <==
	I0528 21:32:49.760503       1 shared_informer.go:320] Caches are synced for expand
	I0528 21:32:49.764049       1 shared_informer.go:320] Caches are synced for job
	I0528 21:32:49.769265       1 shared_informer.go:320] Caches are synced for HPA
	I0528 21:32:49.772051       1 shared_informer.go:320] Caches are synced for persistent volume
	I0528 21:32:49.773568       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0528 21:32:49.783383       1 shared_informer.go:320] Caches are synced for stateful set
	I0528 21:32:49.783440       1 shared_informer.go:320] Caches are synced for PVC protection
	I0528 21:32:49.783830       1 shared_informer.go:320] Caches are synced for cronjob
	I0528 21:32:49.783954       1 shared_informer.go:320] Caches are synced for attach detach
	I0528 21:32:49.784131       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0528 21:32:49.786477       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="67.296µs"
	I0528 21:32:49.802642       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0528 21:32:49.812466       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0528 21:32:49.832754       1 shared_informer.go:320] Caches are synced for daemon sets
	I0528 21:32:49.844190       1 shared_informer.go:320] Caches are synced for taint
	I0528 21:32:49.844366       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0528 21:32:49.844456       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-547166"
	I0528 21:32:49.844549       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0528 21:32:49.926835       1 shared_informer.go:320] Caches are synced for resource quota
	I0528 21:32:49.951816       1 shared_informer.go:320] Caches are synced for service account
	I0528 21:32:49.960100       1 shared_informer.go:320] Caches are synced for namespace
	I0528 21:32:49.984271       1 shared_informer.go:320] Caches are synced for resource quota
	I0528 21:32:50.408982       1 shared_informer.go:320] Caches are synced for garbage collector
	I0528 21:32:50.409034       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0528 21:32:50.418801       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [6fd4e65044b3a9dcd1f16966186410e1f690d18a4a5d286b7249fa824cb0213d] <==
	I0528 21:32:19.250924       1 serving.go:380] Generated self-signed cert in-memory
	I0528 21:32:19.682548       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0528 21:32:19.682652       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 21:32:19.684282       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0528 21:32:19.684394       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0528 21:32:19.684404       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0528 21:32:19.684413       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	
	==> kube-proxy [08b452587905adc995d6be896ca09b4cdc8f7e2c8006c1dde8f840780234c73f] <==
	I0528 21:32:37.904246       1 server_linux.go:69] "Using iptables proxy"
	I0528 21:32:37.912085       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.108"]
	I0528 21:32:37.961953       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0528 21:32:37.962067       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0528 21:32:37.962082       1 server_linux.go:165] "Using iptables Proxier"
	I0528 21:32:37.968098       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0528 21:32:37.968350       1 server.go:872] "Version info" version="v1.30.1"
	I0528 21:32:37.968418       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 21:32:37.969578       1 config.go:192] "Starting service config controller"
	I0528 21:32:37.969625       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0528 21:32:37.969662       1 config.go:101] "Starting endpoint slice config controller"
	I0528 21:32:37.969678       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0528 21:32:37.970191       1 config.go:319] "Starting node config controller"
	I0528 21:32:37.970276       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0528 21:32:38.069812       1 shared_informer.go:320] Caches are synced for service config
	I0528 21:32:38.069916       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0528 21:32:38.071259       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [32b435d362fd1970b9f1fca4268df3152c9a49524530b52b325893dcca65a19e] <==
	
	
	==> kube-scheduler [2a31c9d04166019c2a49c4e013e4b94aa69a5bb31cef09d61eb8df29082fac19] <==
	I0528 21:32:19.545423       1 serving.go:380] Generated self-signed cert in-memory
	W0528 21:32:21.661358       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0528 21:32:21.661434       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0528 21:32:21.661462       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0528 21:32:21.661486       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0528 21:32:21.711231       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0528 21:32:21.711281       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 21:32:21.716025       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	E0528 21:32:21.716240       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	E0528 21:32:21.716466       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c9bc6b0cdba4c5acc81d7fd9a4fb877957755ca949984c6fe374005306c9e3da] <==
	I0528 21:32:35.350889       1 serving.go:380] Generated self-signed cert in-memory
	W0528 21:32:36.989566       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0528 21:32:36.991798       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0528 21:32:36.991865       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0528 21:32:36.991900       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0528 21:32:37.033296       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0528 21:32:37.033391       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 21:32:37.044562       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0528 21:32:37.047605       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0528 21:32:37.047672       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0528 21:32:37.047867       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0528 21:32:37.148517       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 28 21:32:33 pause-547166 kubelet[3430]: I0528 21:32:33.692663    3430 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2b2509cd598b212d9d9a62337e8e8714-usr-share-ca-certificates\") pod \"kube-apiserver-pause-547166\" (UID: \"2b2509cd598b212d9d9a62337e8e8714\") " pod="kube-system/kube-apiserver-pause-547166"
	May 28 21:32:33 pause-547166 kubelet[3430]: I0528 21:32:33.692677    3430 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/80a357c6b54cb6407e6733b556b49073-k8s-certs\") pod \"kube-controller-manager-pause-547166\" (UID: \"80a357c6b54cb6407e6733b556b49073\") " pod="kube-system/kube-controller-manager-pause-547166"
	May 28 21:32:33 pause-547166 kubelet[3430]: I0528 21:32:33.692770    3430 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/80a357c6b54cb6407e6733b556b49073-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-547166\" (UID: \"80a357c6b54cb6407e6733b556b49073\") " pod="kube-system/kube-controller-manager-pause-547166"
	May 28 21:32:33 pause-547166 kubelet[3430]: E0528 21:32:33.694034    3430 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-547166?timeout=10s\": dial tcp 192.168.50.108:8443: connect: connection refused" interval="400ms"
	May 28 21:32:33 pause-547166 kubelet[3430]: I0528 21:32:33.791149    3430 kubelet_node_status.go:73] "Attempting to register node" node="pause-547166"
	May 28 21:32:33 pause-547166 kubelet[3430]: E0528 21:32:33.792251    3430 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.108:8443: connect: connection refused" node="pause-547166"
	May 28 21:32:33 pause-547166 kubelet[3430]: I0528 21:32:33.962275    3430 scope.go:117] "RemoveContainer" containerID="6fd4e65044b3a9dcd1f16966186410e1f690d18a4a5d286b7249fa824cb0213d"
	May 28 21:32:33 pause-547166 kubelet[3430]: I0528 21:32:33.963822    3430 scope.go:117] "RemoveContainer" containerID="2a31c9d04166019c2a49c4e013e4b94aa69a5bb31cef09d61eb8df29082fac19"
	May 28 21:32:33 pause-547166 kubelet[3430]: I0528 21:32:33.964933    3430 scope.go:117] "RemoveContainer" containerID="389d655064d2f72cd1d31d88f738025b8d89ca4b2a1b99ae56387c719cc80594"
	May 28 21:32:33 pause-547166 kubelet[3430]: I0528 21:32:33.966511    3430 scope.go:117] "RemoveContainer" containerID="1f249f829d4e7068aec569445d4895582618b4d27ac743a409d93faad930c9de"
	May 28 21:32:34 pause-547166 kubelet[3430]: E0528 21:32:34.095776    3430 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-547166?timeout=10s\": dial tcp 192.168.50.108:8443: connect: connection refused" interval="800ms"
	May 28 21:32:34 pause-547166 kubelet[3430]: I0528 21:32:34.196507    3430 kubelet_node_status.go:73] "Attempting to register node" node="pause-547166"
	May 28 21:32:34 pause-547166 kubelet[3430]: E0528 21:32:34.197620    3430 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.108:8443: connect: connection refused" node="pause-547166"
	May 28 21:32:34 pause-547166 kubelet[3430]: I0528 21:32:34.999572    3430 kubelet_node_status.go:73] "Attempting to register node" node="pause-547166"
	May 28 21:32:37 pause-547166 kubelet[3430]: I0528 21:32:37.129004    3430 kubelet_node_status.go:112] "Node was previously registered" node="pause-547166"
	May 28 21:32:37 pause-547166 kubelet[3430]: I0528 21:32:37.129497    3430 kubelet_node_status.go:76] "Successfully registered node" node="pause-547166"
	May 28 21:32:37 pause-547166 kubelet[3430]: I0528 21:32:37.131952    3430 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 28 21:32:37 pause-547166 kubelet[3430]: I0528 21:32:37.133238    3430 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 28 21:32:37 pause-547166 kubelet[3430]: I0528 21:32:37.475690    3430 apiserver.go:52] "Watching apiserver"
	May 28 21:32:37 pause-547166 kubelet[3430]: I0528 21:32:37.478637    3430 topology_manager.go:215] "Topology Admit Handler" podUID="4e37fe79-cc67-4012-93b6-79ecc1f88ec7" podNamespace="kube-system" podName="coredns-7db6d8ff4d-7rb9n"
	May 28 21:32:37 pause-547166 kubelet[3430]: I0528 21:32:37.478832    3430 topology_manager.go:215] "Topology Admit Handler" podUID="b8bf4bf8-52a8-4277-a373-bbeef065c3f5" podNamespace="kube-system" podName="kube-proxy-94v5m"
	May 28 21:32:37 pause-547166 kubelet[3430]: I0528 21:32:37.489059    3430 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	May 28 21:32:37 pause-547166 kubelet[3430]: I0528 21:32:37.492170    3430 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b8bf4bf8-52a8-4277-a373-bbeef065c3f5-xtables-lock\") pod \"kube-proxy-94v5m\" (UID: \"b8bf4bf8-52a8-4277-a373-bbeef065c3f5\") " pod="kube-system/kube-proxy-94v5m"
	May 28 21:32:37 pause-547166 kubelet[3430]: I0528 21:32:37.492320    3430 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b8bf4bf8-52a8-4277-a373-bbeef065c3f5-lib-modules\") pod \"kube-proxy-94v5m\" (UID: \"b8bf4bf8-52a8-4277-a373-bbeef065c3f5\") " pod="kube-system/kube-proxy-94v5m"
	May 28 21:32:37 pause-547166 kubelet[3430]: I0528 21:32:37.779876    3430 scope.go:117] "RemoveContainer" containerID="32b435d362fd1970b9f1fca4268df3152c9a49524530b52b325893dcca65a19e"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-547166 -n pause-547166
helpers_test.go:261: (dbg) Run:  kubectl --context pause-547166 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-547166 -n pause-547166
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-547166 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-547166 logs -n 25: (1.735216898s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                    Args                    |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-110727 sudo cat                  | cilium-110727             | jenkins | v1.33.1 | 28 May 24 21:30 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-110727 sudo                      | cilium-110727             | jenkins | v1.33.1 | 28 May 24 21:30 UTC |                     |
	|         | cri-dockerd --version                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-110727 sudo                      | cilium-110727             | jenkins | v1.33.1 | 28 May 24 21:30 UTC |                     |
	|         | systemctl status containerd                |                           |         |         |                     |                     |
	|         | --all --full --no-pager                    |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-187083 sudo                | NoKubernetes-187083       | jenkins | v1.33.1 | 28 May 24 21:30 UTC |                     |
	|         | systemctl is-active --quiet                |                           |         |         |                     |                     |
	|         | service kubelet                            |                           |         |         |                     |                     |
	| ssh     | -p cilium-110727 sudo                      | cilium-110727             | jenkins | v1.33.1 | 28 May 24 21:30 UTC |                     |
	|         | systemctl cat containerd                   |                           |         |         |                     |                     |
	|         | --no-pager                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-110727 sudo cat                  | cilium-110727             | jenkins | v1.33.1 | 28 May 24 21:30 UTC |                     |
	|         | /lib/systemd/system/containerd.service     |                           |         |         |                     |                     |
	| ssh     | -p cilium-110727 sudo cat                  | cilium-110727             | jenkins | v1.33.1 | 28 May 24 21:30 UTC |                     |
	|         | /etc/containerd/config.toml                |                           |         |         |                     |                     |
	| ssh     | -p cilium-110727 sudo                      | cilium-110727             | jenkins | v1.33.1 | 28 May 24 21:30 UTC |                     |
	|         | containerd config dump                     |                           |         |         |                     |                     |
	| ssh     | -p cilium-110727 sudo                      | cilium-110727             | jenkins | v1.33.1 | 28 May 24 21:30 UTC |                     |
	|         | systemctl status crio --all                |                           |         |         |                     |                     |
	|         | --full --no-pager                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-110727 sudo                      | cilium-110727             | jenkins | v1.33.1 | 28 May 24 21:30 UTC |                     |
	|         | systemctl cat crio --no-pager              |                           |         |         |                     |                     |
	| ssh     | -p cilium-110727 sudo find                 | cilium-110727             | jenkins | v1.33.1 | 28 May 24 21:30 UTC |                     |
	|         | /etc/crio -type f -exec sh -c              |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                       |                           |         |         |                     |                     |
	| ssh     | -p cilium-110727 sudo crio                 | cilium-110727             | jenkins | v1.33.1 | 28 May 24 21:30 UTC |                     |
	|         | config                                     |                           |         |         |                     |                     |
	| delete  | -p cilium-110727                           | cilium-110727             | jenkins | v1.33.1 | 28 May 24 21:30 UTC | 28 May 24 21:30 UTC |
	| stop    | -p NoKubernetes-187083                     | NoKubernetes-187083       | jenkins | v1.33.1 | 28 May 24 21:30 UTC | 28 May 24 21:30 UTC |
	| start   | -p pause-547166 --memory=2048              | pause-547166              | jenkins | v1.33.1 | 28 May 24 21:30 UTC | 28 May 24 21:32 UTC |
	|         | --install-addons=false                     |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-185653                  | running-upgrade-185653    | jenkins | v1.33.1 | 28 May 24 21:30 UTC | 28 May 24 21:30 UTC |
	| start   | -p NoKubernetes-187083                     | NoKubernetes-187083       | jenkins | v1.33.1 | 28 May 24 21:30 UTC | 28 May 24 21:31 UTC |
	|         | --driver=kvm2                              |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-314578               | kubernetes-upgrade-314578 | jenkins | v1.33.1 | 28 May 24 21:30 UTC |                     |
	|         | --memory=2200                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0               |                           |         |         |                     |                     |
	|         | --alsologtostderr                          |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-187083 sudo                | NoKubernetes-187083       | jenkins | v1.33.1 | 28 May 24 21:31 UTC |                     |
	|         | systemctl is-active --quiet                |                           |         |         |                     |                     |
	|         | service kubelet                            |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-187083                     | NoKubernetes-187083       | jenkins | v1.33.1 | 28 May 24 21:31 UTC | 28 May 24 21:31 UTC |
	| start   | -p stopped-upgrade-742900                  | minikube                  | jenkins | v1.26.0 | 28 May 24 21:31 UTC | 28 May 24 21:32 UTC |
	|         | --memory=2200 --vm-driver=kvm2             |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                  |                           |         |         |                     |                     |
	| start   | -p pause-547166                            | pause-547166              | jenkins | v1.33.1 | 28 May 24 21:32 UTC | 28 May 24 21:32 UTC |
	|         | --alsologtostderr                          |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-742900 stop                | minikube                  | jenkins | v1.26.0 | 28 May 24 21:32 UTC | 28 May 24 21:32 UTC |
	| start   | -p stopped-upgrade-742900                  | stopped-upgrade-742900    | jenkins | v1.33.1 | 28 May 24 21:32 UTC |                     |
	|         | --memory=2200                              |                           |         |         |                     |                     |
	|         | --alsologtostderr                          |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	| start   | -p cert-expiration-257793                  | cert-expiration-257793    | jenkins | v1.33.1 | 28 May 24 21:32 UTC |                     |
	|         | --memory=2048                              |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                              |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	|---------|--------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/28 21:32:34
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0528 21:32:34.242584   53940 out.go:291] Setting OutFile to fd 1 ...
	I0528 21:32:34.242686   53940 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:32:34.242690   53940 out.go:304] Setting ErrFile to fd 2...
	I0528 21:32:34.242693   53940 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:32:34.242866   53940 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
	I0528 21:32:34.243441   53940 out.go:298] Setting JSON to false
	I0528 21:32:34.244488   53940 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4497,"bootTime":1716927457,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0528 21:32:34.244538   53940 start.go:139] virtualization: kvm guest
	I0528 21:32:34.246530   53940 out.go:177] * [cert-expiration-257793] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0528 21:32:34.248125   53940 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 21:32:34.249237   53940 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 21:32:34.248200   53940 notify.go:220] Checking for updates...
	I0528 21:32:34.251796   53940 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 21:32:34.253232   53940 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 21:32:34.254661   53940 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0528 21:32:34.256058   53940 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 21:32:34.257991   53940 config.go:182] Loaded profile config "cert-expiration-257793": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 21:32:34.258593   53940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:32:34.258644   53940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:32:34.275148   53940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34037
	I0528 21:32:34.275603   53940 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:32:34.276197   53940 main.go:141] libmachine: Using API Version  1
	I0528 21:32:34.276217   53940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:32:34.276597   53940 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:32:34.276800   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .DriverName
	I0528 21:32:34.277093   53940 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 21:32:34.277511   53940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:32:34.277550   53940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:32:34.292745   53940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38007
	I0528 21:32:34.293130   53940 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:32:34.293750   53940 main.go:141] libmachine: Using API Version  1
	I0528 21:32:34.293789   53940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:32:34.294172   53940 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:32:34.294371   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .DriverName
	I0528 21:32:34.331119   53940 out.go:177] * Using the kvm2 driver based on existing profile
	I0528 21:32:34.332304   53940 start.go:297] selected driver: kvm2
	I0528 21:32:34.332310   53940 start.go:901] validating driver "kvm2" against &{Name:cert-expiration-257793 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:cert-expiration-257793 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.246 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:32:34.332441   53940 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0528 21:32:34.333095   53940 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 21:32:34.333158   53940 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18966-3963/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0528 21:32:34.350190   53940 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0528 21:32:34.350527   53940 cni.go:84] Creating CNI manager for ""
	I0528 21:32:34.350535   53940 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 21:32:34.350580   53940 start.go:340] cluster config:
	{Name:cert-expiration-257793 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:cert-expiration-257793 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.246 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:32:34.350737   53940 iso.go:125] acquiring lock: {Name:mk0ef48bd19f4019c3ee87391d15b9afc1d5e8d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 21:32:34.353521   53940 out.go:177] * Starting "cert-expiration-257793" primary control-plane node in "cert-expiration-257793" cluster
	I0528 21:32:32.070685   53591 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 3e2fdee3b8477fc1ea3db36c29f9c36519b2f5dfb905e78905d1b98b3beb3b98 4012e4ffbdbc2847150dbd792548b836bf108cd924bb66995eb87805550bd1ea 2a31c9d04166019c2a49c4e013e4b94aa69a5bb31cef09d61eb8df29082fac19 389d655064d2f72cd1d31d88f738025b8d89ca4b2a1b99ae56387c719cc80594 1f249f829d4e7068aec569445d4895582618b4d27ac743a409d93faad930c9de 6fd4e65044b3a9dcd1f16966186410e1f690d18a4a5d286b7249fa824cb0213d 7c72d9bfb98b10951abf0f7b030e90b97e7106bd756019131ac21b390dafc607 0b484f9e66876230b418bebec6d1b267cd656a2658a2615e7176040219b774af 71b1b8358fcf18a64c7f6fbec02a6d812406a3f6d7f18f132e87081d13151e99 cab49574f7e37cb652b7b1f0d5050ac849fb9e2487c9a9cb3d8a7b4f23b406e9 608a9704db445db40e7de956940dc50cb98f710ceb6bad213f03325df2b4a83a e2dd991b81384a9ac70bab8a7397ea9f62eae5c73ef7a37ecbef81571a527a87: (13.079502055s)
	W0528 21:32:32.070762   53591 kubeadm.go:638] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 3e2fdee3b8477fc1ea3db36c29f9c36519b2f5dfb905e78905d1b98b3beb3b98 4012e4ffbdbc2847150dbd792548b836bf108cd924bb66995eb87805550bd1ea 2a31c9d04166019c2a49c4e013e4b94aa69a5bb31cef09d61eb8df29082fac19 389d655064d2f72cd1d31d88f738025b8d89ca4b2a1b99ae56387c719cc80594 1f249f829d4e7068aec569445d4895582618b4d27ac743a409d93faad930c9de 6fd4e65044b3a9dcd1f16966186410e1f690d18a4a5d286b7249fa824cb0213d 7c72d9bfb98b10951abf0f7b030e90b97e7106bd756019131ac21b390dafc607 0b484f9e66876230b418bebec6d1b267cd656a2658a2615e7176040219b774af 71b1b8358fcf18a64c7f6fbec02a6d812406a3f6d7f18f132e87081d13151e99 cab49574f7e37cb652b7b1f0d5050ac849fb9e2487c9a9cb3d8a7b4f23b406e9 608a9704db445db40e7de956940dc50cb98f710ceb6bad213f03325df2b4a83a e2dd991b81384a9ac70bab8a7397ea9f62eae5c73ef7a37ecbef81571a527a87: Process exited with status 1
	stdout:
	3e2fdee3b8477fc1ea3db36c29f9c36519b2f5dfb905e78905d1b98b3beb3b98
	4012e4ffbdbc2847150dbd792548b836bf108cd924bb66995eb87805550bd1ea
	2a31c9d04166019c2a49c4e013e4b94aa69a5bb31cef09d61eb8df29082fac19
	389d655064d2f72cd1d31d88f738025b8d89ca4b2a1b99ae56387c719cc80594
	1f249f829d4e7068aec569445d4895582618b4d27ac743a409d93faad930c9de
	6fd4e65044b3a9dcd1f16966186410e1f690d18a4a5d286b7249fa824cb0213d
	
	stderr:
	E0528 21:32:32.058500    3164 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c72d9bfb98b10951abf0f7b030e90b97e7106bd756019131ac21b390dafc607\": container with ID starting with 7c72d9bfb98b10951abf0f7b030e90b97e7106bd756019131ac21b390dafc607 not found: ID does not exist" containerID="7c72d9bfb98b10951abf0f7b030e90b97e7106bd756019131ac21b390dafc607"
	time="2024-05-28T21:32:32Z" level=fatal msg="stopping the container \"7c72d9bfb98b10951abf0f7b030e90b97e7106bd756019131ac21b390dafc607\": rpc error: code = NotFound desc = could not find container \"7c72d9bfb98b10951abf0f7b030e90b97e7106bd756019131ac21b390dafc607\": container with ID starting with 7c72d9bfb98b10951abf0f7b030e90b97e7106bd756019131ac21b390dafc607 not found: ID does not exist"
	I0528 21:32:32.070816   53591 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0528 21:32:32.111727   53591 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0528 21:32:32.122727   53591 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5651 May 28 21:30 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 May 28 21:30 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 May 28 21:31 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5602 May 28 21:30 /etc/kubernetes/scheduler.conf
	
	I0528 21:32:32.122783   53591 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0528 21:32:32.132269   53591 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0528 21:32:32.141481   53591 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0528 21:32:32.151679   53591 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0528 21:32:32.151735   53591 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0528 21:32:32.163796   53591 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0528 21:32:32.174517   53591 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0528 21:32:32.174568   53591 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0528 21:32:32.184310   53591 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0528 21:32:32.194274   53591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 21:32:32.263540   53591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 21:32:33.090261   53591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0528 21:32:33.342804   53591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 21:32:33.422569   53591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0528 21:32:33.534409   53591 api_server.go:52] waiting for apiserver process to appear ...
	I0528 21:32:33.534496   53591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:32:34.035298   53591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:32:34.535002   53591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:32:34.553246   53591 api_server.go:72] duration metric: took 1.018835926s to wait for apiserver process to appear ...
	I0528 21:32:34.553281   53591 api_server.go:88] waiting for apiserver healthz status ...
	I0528 21:32:34.553303   53591 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0528 21:32:31.540797   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .Start
	I0528 21:32:31.540947   53852 main.go:141] libmachine: (stopped-upgrade-742900) Ensuring networks are active...
	I0528 21:32:31.541794   53852 main.go:141] libmachine: (stopped-upgrade-742900) Ensuring network default is active
	I0528 21:32:31.542165   53852 main.go:141] libmachine: (stopped-upgrade-742900) Ensuring network mk-stopped-upgrade-742900 is active
	I0528 21:32:31.542537   53852 main.go:141] libmachine: (stopped-upgrade-742900) Getting domain xml...
	I0528 21:32:31.543124   53852 main.go:141] libmachine: (stopped-upgrade-742900) Creating domain...
	I0528 21:32:32.779953   53852 main.go:141] libmachine: (stopped-upgrade-742900) Waiting to get IP...
	I0528 21:32:32.780835   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:32.781260   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | unable to find current IP address of domain stopped-upgrade-742900 in network mk-stopped-upgrade-742900
	I0528 21:32:32.781321   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | I0528 21:32:32.781228   53887 retry.go:31] will retry after 248.685661ms: waiting for machine to come up
	I0528 21:32:33.031975   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:33.032518   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | unable to find current IP address of domain stopped-upgrade-742900 in network mk-stopped-upgrade-742900
	I0528 21:32:33.032547   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | I0528 21:32:33.032466   53887 retry.go:31] will retry after 241.510489ms: waiting for machine to come up
	I0528 21:32:33.275894   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:33.276373   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | unable to find current IP address of domain stopped-upgrade-742900 in network mk-stopped-upgrade-742900
	I0528 21:32:33.276400   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | I0528 21:32:33.276300   53887 retry.go:31] will retry after 437.759362ms: waiting for machine to come up
	I0528 21:32:33.715917   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:33.716440   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | unable to find current IP address of domain stopped-upgrade-742900 in network mk-stopped-upgrade-742900
	I0528 21:32:33.716465   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | I0528 21:32:33.716389   53887 retry.go:31] will retry after 385.209263ms: waiting for machine to come up
	I0528 21:32:34.103119   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:34.103520   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | unable to find current IP address of domain stopped-upgrade-742900 in network mk-stopped-upgrade-742900
	I0528 21:32:34.103551   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | I0528 21:32:34.103475   53887 retry.go:31] will retry after 467.39146ms: waiting for machine to come up
	I0528 21:32:34.572088   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:34.572685   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | unable to find current IP address of domain stopped-upgrade-742900 in network mk-stopped-upgrade-742900
	I0528 21:32:34.572716   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | I0528 21:32:34.572624   53887 retry.go:31] will retry after 768.631697ms: waiting for machine to come up
	I0528 21:32:35.342701   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:35.343237   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | unable to find current IP address of domain stopped-upgrade-742900 in network mk-stopped-upgrade-742900
	I0528 21:32:35.343264   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | I0528 21:32:35.343199   53887 retry.go:31] will retry after 799.0965ms: waiting for machine to come up
	I0528 21:32:36.144236   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:36.144769   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | unable to find current IP address of domain stopped-upgrade-742900 in network mk-stopped-upgrade-742900
	I0528 21:32:36.144816   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | I0528 21:32:36.144714   53887 retry.go:31] will retry after 1.244270656s: waiting for machine to come up
	I0528 21:32:36.923209   53591 api_server.go:279] https://192.168.50.108:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0528 21:32:36.923243   53591 api_server.go:103] status: https://192.168.50.108:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0528 21:32:36.923261   53591 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0528 21:32:36.988913   53591 api_server.go:279] https://192.168.50.108:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0528 21:32:36.988938   53591 api_server.go:103] status: https://192.168.50.108:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0528 21:32:37.054146   53591 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0528 21:32:37.067176   53591 api_server.go:279] https://192.168.50.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 21:32:37.067201   53591 api_server.go:103] status: https://192.168.50.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 21:32:37.554133   53591 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0528 21:32:37.559394   53591 api_server.go:279] https://192.168.50.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 21:32:37.559415   53591 api_server.go:103] status: https://192.168.50.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 21:32:38.053589   53591 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0528 21:32:38.059455   53591 api_server.go:279] https://192.168.50.108:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 21:32:38.059479   53591 api_server.go:103] status: https://192.168.50.108:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 21:32:38.554091   53591 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0528 21:32:38.558328   53591 api_server.go:279] https://192.168.50.108:8443/healthz returned 200:
	ok
	I0528 21:32:38.564767   53591 api_server.go:141] control plane version: v1.30.1
	I0528 21:32:38.564793   53591 api_server.go:131] duration metric: took 4.011504281s to wait for apiserver health ...
	I0528 21:32:38.564803   53591 cni.go:84] Creating CNI manager for ""
	I0528 21:32:38.564811   53591 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 21:32:38.566498   53591 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0528 21:32:34.354841   53940 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 21:32:34.354878   53940 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0528 21:32:34.354885   53940 cache.go:56] Caching tarball of preloaded images
	I0528 21:32:34.354995   53940 preload.go:173] Found /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0528 21:32:34.355005   53940 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0528 21:32:34.355137   53940 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/cert-expiration-257793/config.json ...
	I0528 21:32:34.355402   53940 start.go:360] acquireMachinesLock for cert-expiration-257793: {Name:mk6fb3103b370c7f3d9923fd9de7afe2c77239d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0528 21:32:38.567919   53591 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0528 21:32:38.582752   53591 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0528 21:32:38.600822   53591 system_pods.go:43] waiting for kube-system pods to appear ...
	I0528 21:32:38.609489   53591 system_pods.go:59] 6 kube-system pods found
	I0528 21:32:38.609512   53591 system_pods.go:61] "coredns-7db6d8ff4d-7rb9n" [4e37fe79-cc67-4012-93b6-79ecc1f88ec7] Running
	I0528 21:32:38.609519   53591 system_pods.go:61] "etcd-pause-547166" [d9bfd727-090c-447f-8d1c-fb41302a4f99] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0528 21:32:38.609526   53591 system_pods.go:61] "kube-apiserver-pause-547166" [9bfb145c-9adf-4ba1-b909-e5d1fc40a080] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0528 21:32:38.609533   53591 system_pods.go:61] "kube-controller-manager-pause-547166" [605d47d8-50b2-4b0c-8ea3-0da1a7ce121a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0528 21:32:38.609540   53591 system_pods.go:61] "kube-proxy-94v5m" [b8bf4bf8-52a8-4277-a373-bbeef065c3f5] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0528 21:32:38.609550   53591 system_pods.go:61] "kube-scheduler-pause-547166" [262f8e41-4c82-4d7a-8f49-7be7a940bd96] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0528 21:32:38.609560   53591 system_pods.go:74] duration metric: took 8.72029ms to wait for pod list to return data ...
	I0528 21:32:38.609570   53591 node_conditions.go:102] verifying NodePressure condition ...
	I0528 21:32:38.612535   53591 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 21:32:38.612557   53591 node_conditions.go:123] node cpu capacity is 2
	I0528 21:32:38.612572   53591 node_conditions.go:105] duration metric: took 2.992642ms to run NodePressure ...
	I0528 21:32:38.612590   53591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 21:32:38.892573   53591 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0528 21:32:38.896612   53591 kubeadm.go:733] kubelet initialised
	I0528 21:32:38.896639   53591 kubeadm.go:734] duration metric: took 4.038221ms waiting for restarted kubelet to initialise ...
	I0528 21:32:38.896650   53591 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 21:32:38.900931   53591 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-7rb9n" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:38.906523   53591 pod_ready.go:92] pod "coredns-7db6d8ff4d-7rb9n" in "kube-system" namespace has status "Ready":"True"
	I0528 21:32:38.906548   53591 pod_ready.go:81] duration metric: took 5.594734ms for pod "coredns-7db6d8ff4d-7rb9n" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:38.906561   53591 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-547166" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:37.390639   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:37.391129   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | unable to find current IP address of domain stopped-upgrade-742900 in network mk-stopped-upgrade-742900
	I0528 21:32:37.391157   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | I0528 21:32:37.391094   53887 retry.go:31] will retry after 1.203886087s: waiting for machine to come up
	I0528 21:32:38.596302   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:38.596841   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | unable to find current IP address of domain stopped-upgrade-742900 in network mk-stopped-upgrade-742900
	I0528 21:32:38.596869   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | I0528 21:32:38.596793   53887 retry.go:31] will retry after 1.790511234s: waiting for machine to come up
	I0528 21:32:40.388647   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:40.389227   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | unable to find current IP address of domain stopped-upgrade-742900 in network mk-stopped-upgrade-742900
	I0528 21:32:40.389266   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | I0528 21:32:40.389161   53887 retry.go:31] will retry after 2.288302933s: waiting for machine to come up
	I0528 21:32:40.913817   53591 pod_ready.go:102] pod "etcd-pause-547166" in "kube-system" namespace has status "Ready":"False"
	I0528 21:32:42.914013   53591 pod_ready.go:102] pod "etcd-pause-547166" in "kube-system" namespace has status "Ready":"False"
	I0528 21:32:45.412639   53591 pod_ready.go:102] pod "etcd-pause-547166" in "kube-system" namespace has status "Ready":"False"
	I0528 21:32:42.678884   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:42.679427   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | unable to find current IP address of domain stopped-upgrade-742900 in network mk-stopped-upgrade-742900
	I0528 21:32:42.679458   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | I0528 21:32:42.679372   53887 retry.go:31] will retry after 2.516621293s: waiting for machine to come up
	I0528 21:32:45.197084   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:45.197584   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | unable to find current IP address of domain stopped-upgrade-742900 in network mk-stopped-upgrade-742900
	I0528 21:32:45.197640   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | I0528 21:32:45.197545   53887 retry.go:31] will retry after 4.476953608s: waiting for machine to come up
	I0528 21:32:43.235439   52629 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:32:43.235648   52629 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
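	Note on the [kubelet-check] lines above: kubeadm is probing the kubelet's local health endpoint and getting connection refused. For illustration only (these commands are not part of the test run), the same probe and the usual follow-up checks can be issued directly on the node:
		curl -sSL http://localhost:10248/healthz    # the exact probe kubeadm reports above
		systemctl status kubelet                    # is the kubelet unit running at all?
		journalctl -u kubelet --no-pager -n 50      # recent kubelet output if the probe is refused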
	I0528 21:32:47.413533   53591 pod_ready.go:102] pod "etcd-pause-547166" in "kube-system" namespace has status "Ready":"False"
	I0528 21:32:49.913943   53591 pod_ready.go:102] pod "etcd-pause-547166" in "kube-system" namespace has status "Ready":"False"
	I0528 21:32:51.062411   53940 start.go:364] duration metric: took 16.706975401s to acquireMachinesLock for "cert-expiration-257793"
	I0528 21:32:51.062450   53940 start.go:96] Skipping create...Using existing machine configuration
	I0528 21:32:51.062455   53940 fix.go:54] fixHost starting: 
	I0528 21:32:51.062849   53940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:32:51.062888   53940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:32:51.079853   53940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33669
	I0528 21:32:51.080196   53940 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:32:51.080698   53940 main.go:141] libmachine: Using API Version  1
	I0528 21:32:51.080720   53940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:32:51.081018   53940 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:32:51.081180   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .DriverName
	I0528 21:32:51.081304   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetState
	I0528 21:32:51.082936   53940 fix.go:112] recreateIfNeeded on cert-expiration-257793: state=Running err=<nil>
	W0528 21:32:51.082951   53940 fix.go:138] unexpected machine state, will restart: <nil>
	I0528 21:32:51.084493   53940 out.go:177] * Updating the running kvm2 "cert-expiration-257793" VM ...
	I0528 21:32:49.678422   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:49.678959   53852 main.go:141] libmachine: (stopped-upgrade-742900) Found IP for machine: 192.168.61.251
	I0528 21:32:49.678986   53852 main.go:141] libmachine: (stopped-upgrade-742900) Reserving static IP address...
	I0528 21:32:49.679019   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has current primary IP address 192.168.61.251 and MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:49.679483   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | found host DHCP lease matching {name: "stopped-upgrade-742900", mac: "52:54:00:d5:2c:e6", ip: "192.168.61.251"} in network mk-stopped-upgrade-742900: {Iface:virbr3 ExpiryTime:2024-05-28 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d5:2c:e6 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:stopped-upgrade-742900 Clientid:01:52:54:00:d5:2c:e6}
	I0528 21:32:49.679515   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | skip adding static IP to network mk-stopped-upgrade-742900 - found existing host DHCP lease matching {name: "stopped-upgrade-742900", mac: "52:54:00:d5:2c:e6", ip: "192.168.61.251"}
	I0528 21:32:49.679542   53852 main.go:141] libmachine: (stopped-upgrade-742900) Reserved static IP address: 192.168.61.251
	I0528 21:32:49.679561   53852 main.go:141] libmachine: (stopped-upgrade-742900) Waiting for SSH to be available...
	I0528 21:32:49.679576   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | Getting to WaitForSSH function...
	I0528 21:32:49.681793   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:49.682251   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:2c:e6", ip: ""} in network mk-stopped-upgrade-742900: {Iface:virbr3 ExpiryTime:2024-05-28 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d5:2c:e6 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:stopped-upgrade-742900 Clientid:01:52:54:00:d5:2c:e6}
	I0528 21:32:49.682290   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined IP address 192.168.61.251 and MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:49.682369   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | Using SSH client type: external
	I0528 21:32:49.682399   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | Using SSH private key: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/stopped-upgrade-742900/id_rsa (-rw-------)
	I0528 21:32:49.682440   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.251 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18966-3963/.minikube/machines/stopped-upgrade-742900/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0528 21:32:49.682459   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | About to run SSH command:
	I0528 21:32:49.682479   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | exit 0
	I0528 21:32:49.777634   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | SSH cmd err, output: <nil>: 
	I0528 21:32:49.778016   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetConfigRaw
	I0528 21:32:49.778668   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetIP
	I0528 21:32:49.781414   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:49.781911   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:2c:e6", ip: ""} in network mk-stopped-upgrade-742900: {Iface:virbr3 ExpiryTime:2024-05-28 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d5:2c:e6 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:stopped-upgrade-742900 Clientid:01:52:54:00:d5:2c:e6}
	I0528 21:32:49.781936   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined IP address 192.168.61.251 and MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:49.782184   53852 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/stopped-upgrade-742900/config.json ...
	I0528 21:32:49.782408   53852 machine.go:94] provisionDockerMachine start ...
	I0528 21:32:49.782433   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .DriverName
	I0528 21:32:49.782639   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHHostname
	I0528 21:32:49.785397   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:49.785813   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:2c:e6", ip: ""} in network mk-stopped-upgrade-742900: {Iface:virbr3 ExpiryTime:2024-05-28 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d5:2c:e6 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:stopped-upgrade-742900 Clientid:01:52:54:00:d5:2c:e6}
	I0528 21:32:49.785834   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined IP address 192.168.61.251 and MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:49.785989   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHPort
	I0528 21:32:49.786160   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHKeyPath
	I0528 21:32:49.786357   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHKeyPath
	I0528 21:32:49.786531   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHUsername
	I0528 21:32:49.786743   53852 main.go:141] libmachine: Using SSH client type: native
	I0528 21:32:49.786987   53852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.251 22 <nil> <nil>}
	I0528 21:32:49.787004   53852 main.go:141] libmachine: About to run SSH command:
	hostname
	I0528 21:32:49.914247   53852 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0528 21:32:49.914279   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetMachineName
	I0528 21:32:49.914516   53852 buildroot.go:166] provisioning hostname "stopped-upgrade-742900"
	I0528 21:32:49.914537   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetMachineName
	I0528 21:32:49.914749   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHHostname
	I0528 21:32:49.916949   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:49.917301   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:2c:e6", ip: ""} in network mk-stopped-upgrade-742900: {Iface:virbr3 ExpiryTime:2024-05-28 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d5:2c:e6 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:stopped-upgrade-742900 Clientid:01:52:54:00:d5:2c:e6}
	I0528 21:32:49.917318   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined IP address 192.168.61.251 and MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:49.917537   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHPort
	I0528 21:32:49.917697   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHKeyPath
	I0528 21:32:49.917869   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHKeyPath
	I0528 21:32:49.918002   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHUsername
	I0528 21:32:49.918166   53852 main.go:141] libmachine: Using SSH client type: native
	I0528 21:32:49.918323   53852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.251 22 <nil> <nil>}
	I0528 21:32:49.918333   53852 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-742900 && echo "stopped-upgrade-742900" | sudo tee /etc/hostname
	I0528 21:32:50.052046   53852 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-742900
	
	I0528 21:32:50.052076   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHHostname
	I0528 21:32:50.054728   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:50.055139   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:2c:e6", ip: ""} in network mk-stopped-upgrade-742900: {Iface:virbr3 ExpiryTime:2024-05-28 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d5:2c:e6 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:stopped-upgrade-742900 Clientid:01:52:54:00:d5:2c:e6}
	I0528 21:32:50.055172   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined IP address 192.168.61.251 and MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:50.055343   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHPort
	I0528 21:32:50.055560   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHKeyPath
	I0528 21:32:50.055805   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHKeyPath
	I0528 21:32:50.055974   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHUsername
	I0528 21:32:50.056136   53852 main.go:141] libmachine: Using SSH client type: native
	I0528 21:32:50.056311   53852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.251 22 <nil> <nil>}
	I0528 21:32:50.056327   53852 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-742900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-742900/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-742900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 21:32:50.185464   53852 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 21:32:50.185493   53852 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18966-3963/.minikube CaCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18966-3963/.minikube}
	I0528 21:32:50.185544   53852 buildroot.go:174] setting up certificates
	I0528 21:32:50.185559   53852 provision.go:84] configureAuth start
	I0528 21:32:50.185575   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetMachineName
	I0528 21:32:50.185873   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetIP
	I0528 21:32:50.188803   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:50.189242   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:2c:e6", ip: ""} in network mk-stopped-upgrade-742900: {Iface:virbr3 ExpiryTime:2024-05-28 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d5:2c:e6 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:stopped-upgrade-742900 Clientid:01:52:54:00:d5:2c:e6}
	I0528 21:32:50.189280   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined IP address 192.168.61.251 and MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:50.189482   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHHostname
	I0528 21:32:50.191777   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:50.192159   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:2c:e6", ip: ""} in network mk-stopped-upgrade-742900: {Iface:virbr3 ExpiryTime:2024-05-28 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d5:2c:e6 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:stopped-upgrade-742900 Clientid:01:52:54:00:d5:2c:e6}
	I0528 21:32:50.192194   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined IP address 192.168.61.251 and MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:50.192334   53852 provision.go:143] copyHostCerts
	I0528 21:32:50.192397   53852 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem, removing ...
	I0528 21:32:50.192420   53852 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem
	I0528 21:32:50.192502   53852 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem (1123 bytes)
	I0528 21:32:50.192627   53852 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem, removing ...
	I0528 21:32:50.192640   53852 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem
	I0528 21:32:50.192679   53852 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem (1675 bytes)
	I0528 21:32:50.192772   53852 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem, removing ...
	I0528 21:32:50.192782   53852 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem
	I0528 21:32:50.192814   53852 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem (1078 bytes)
	I0528 21:32:50.192904   53852 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-742900 san=[127.0.0.1 192.168.61.251 localhost minikube stopped-upgrade-742900]
	I0528 21:32:50.352058   53852 provision.go:177] copyRemoteCerts
	I0528 21:32:50.352113   53852 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 21:32:50.352137   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHHostname
	I0528 21:32:50.354621   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:50.354940   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:2c:e6", ip: ""} in network mk-stopped-upgrade-742900: {Iface:virbr3 ExpiryTime:2024-05-28 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d5:2c:e6 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:stopped-upgrade-742900 Clientid:01:52:54:00:d5:2c:e6}
	I0528 21:32:50.354973   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined IP address 192.168.61.251 and MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:50.355129   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHPort
	I0528 21:32:50.355323   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHKeyPath
	I0528 21:32:50.355500   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHUsername
	I0528 21:32:50.355616   53852 sshutil.go:53] new ssh client: &{IP:192.168.61.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/stopped-upgrade-742900/id_rsa Username:docker}
	I0528 21:32:50.450765   53852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0528 21:32:50.470977   53852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0528 21:32:50.490550   53852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0528 21:32:50.510285   53852 provision.go:87] duration metric: took 324.711205ms to configureAuth
	I0528 21:32:50.510316   53852 buildroot.go:189] setting minikube options for container-runtime
	I0528 21:32:50.510528   53852 config.go:182] Loaded profile config "stopped-upgrade-742900": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0528 21:32:50.510615   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHHostname
	I0528 21:32:50.513188   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:50.513543   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:2c:e6", ip: ""} in network mk-stopped-upgrade-742900: {Iface:virbr3 ExpiryTime:2024-05-28 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d5:2c:e6 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:stopped-upgrade-742900 Clientid:01:52:54:00:d5:2c:e6}
	I0528 21:32:50.513572   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined IP address 192.168.61.251 and MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:50.513838   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHPort
	I0528 21:32:50.514012   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHKeyPath
	I0528 21:32:50.514174   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHKeyPath
	I0528 21:32:50.514312   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHUsername
	I0528 21:32:50.514502   53852 main.go:141] libmachine: Using SSH client type: native
	I0528 21:32:50.514676   53852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.251 22 <nil> <nil>}
	I0528 21:32:50.514696   53852 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0528 21:32:50.806228   53852 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0528 21:32:50.806276   53852 machine.go:97] duration metric: took 1.023844629s to provisionDockerMachine
	I0528 21:32:50.806289   53852 start.go:293] postStartSetup for "stopped-upgrade-742900" (driver="kvm2")
	I0528 21:32:50.806303   53852 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0528 21:32:50.806332   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .DriverName
	I0528 21:32:50.806608   53852 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0528 21:32:50.806649   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHHostname
	I0528 21:32:50.809185   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:50.809535   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:2c:e6", ip: ""} in network mk-stopped-upgrade-742900: {Iface:virbr3 ExpiryTime:2024-05-28 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d5:2c:e6 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:stopped-upgrade-742900 Clientid:01:52:54:00:d5:2c:e6}
	I0528 21:32:50.809562   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined IP address 192.168.61.251 and MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:50.809737   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHPort
	I0528 21:32:50.809959   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHKeyPath
	I0528 21:32:50.810169   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHUsername
	I0528 21:32:50.810353   53852 sshutil.go:53] new ssh client: &{IP:192.168.61.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/stopped-upgrade-742900/id_rsa Username:docker}
	I0528 21:32:50.900822   53852 ssh_runner.go:195] Run: cat /etc/os-release
	I0528 21:32:50.905507   53852 info.go:137] Remote host: Buildroot 2021.02.12
	I0528 21:32:50.905527   53852 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/addons for local assets ...
	I0528 21:32:50.905594   53852 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/files for local assets ...
	I0528 21:32:50.905700   53852 filesync.go:149] local asset: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem -> 117602.pem in /etc/ssl/certs
	I0528 21:32:50.905819   53852 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0528 21:32:50.916302   53852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem --> /etc/ssl/certs/117602.pem (1708 bytes)
	I0528 21:32:50.936446   53852 start.go:296] duration metric: took 130.143804ms for postStartSetup
	I0528 21:32:50.936482   53852 fix.go:56] duration metric: took 19.414735564s for fixHost
	I0528 21:32:50.936500   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHHostname
	I0528 21:32:50.939307   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:50.939636   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:2c:e6", ip: ""} in network mk-stopped-upgrade-742900: {Iface:virbr3 ExpiryTime:2024-05-28 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d5:2c:e6 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:stopped-upgrade-742900 Clientid:01:52:54:00:d5:2c:e6}
	I0528 21:32:50.939664   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined IP address 192.168.61.251 and MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:50.939860   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHPort
	I0528 21:32:50.940049   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHKeyPath
	I0528 21:32:50.940224   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHKeyPath
	I0528 21:32:50.940396   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHUsername
	I0528 21:32:50.940604   53852 main.go:141] libmachine: Using SSH client type: native
	I0528 21:32:50.940777   53852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.61.251 22 <nil> <nil>}
	I0528 21:32:50.940788   53852 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0528 21:32:51.062245   53852 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716931971.017672263
	
	I0528 21:32:51.062268   53852 fix.go:216] guest clock: 1716931971.017672263
	I0528 21:32:51.062277   53852 fix.go:229] Guest: 2024-05-28 21:32:51.017672263 +0000 UTC Remote: 2024-05-28 21:32:50.936485219 +0000 UTC m=+19.553121433 (delta=81.187044ms)
	I0528 21:32:51.062328   53852 fix.go:200] guest clock delta is within tolerance: 81.187044ms
	I0528 21:32:51.062336   53852 start.go:83] releasing machines lock for "stopped-upgrade-742900", held for 19.540609295s
	I0528 21:32:51.062368   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .DriverName
	I0528 21:32:51.062642   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetIP
	I0528 21:32:51.065448   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:51.065853   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:2c:e6", ip: ""} in network mk-stopped-upgrade-742900: {Iface:virbr3 ExpiryTime:2024-05-28 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d5:2c:e6 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:stopped-upgrade-742900 Clientid:01:52:54:00:d5:2c:e6}
	I0528 21:32:51.065881   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined IP address 192.168.61.251 and MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:51.066074   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .DriverName
	I0528 21:32:51.066595   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .DriverName
	I0528 21:32:51.066760   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .DriverName
	I0528 21:32:51.066830   53852 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0528 21:32:51.066863   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHHostname
	I0528 21:32:51.067005   53852 ssh_runner.go:195] Run: cat /version.json
	I0528 21:32:51.067035   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHHostname
	I0528 21:32:51.069844   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:51.069934   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:51.070310   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:2c:e6", ip: ""} in network mk-stopped-upgrade-742900: {Iface:virbr3 ExpiryTime:2024-05-28 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d5:2c:e6 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:stopped-upgrade-742900 Clientid:01:52:54:00:d5:2c:e6}
	I0528 21:32:51.070340   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined IP address 192.168.61.251 and MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:51.070366   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:2c:e6", ip: ""} in network mk-stopped-upgrade-742900: {Iface:virbr3 ExpiryTime:2024-05-28 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d5:2c:e6 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:stopped-upgrade-742900 Clientid:01:52:54:00:d5:2c:e6}
	I0528 21:32:51.070405   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined IP address 192.168.61.251 and MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:51.070549   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHPort
	I0528 21:32:51.070665   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHPort
	I0528 21:32:51.070752   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHKeyPath
	I0528 21:32:51.070823   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHKeyPath
	I0528 21:32:51.070897   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHUsername
	I0528 21:32:51.070964   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetSSHUsername
	I0528 21:32:51.071048   53852 sshutil.go:53] new ssh client: &{IP:192.168.61.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/stopped-upgrade-742900/id_rsa Username:docker}
	I0528 21:32:51.071086   53852 sshutil.go:53] new ssh client: &{IP:192.168.61.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/stopped-upgrade-742900/id_rsa Username:docker}
	W0528 21:32:51.184101   53852 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0528 21:32:51.184192   53852 ssh_runner.go:195] Run: systemctl --version
	I0528 21:32:51.191060   53852 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0528 21:32:51.334557   53852 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0528 21:32:51.342304   53852 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0528 21:32:51.342371   53852 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 21:32:51.355983   53852 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0528 21:32:51.356003   53852 start.go:494] detecting cgroup driver to use...
	I0528 21:32:51.356060   53852 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0528 21:32:51.372760   53852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 21:32:51.386157   53852 docker.go:217] disabling cri-docker service (if available) ...
	I0528 21:32:51.386221   53852 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0528 21:32:51.399351   53852 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0528 21:32:51.411327   53852 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0528 21:32:51.520656   53852 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0528 21:32:51.652706   53852 docker.go:233] disabling docker service ...
	I0528 21:32:51.652779   53852 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0528 21:32:51.665448   53852 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0528 21:32:51.676694   53852 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0528 21:32:51.790493   53852 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0528 21:32:51.898846   53852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0528 21:32:51.911016   53852 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 21:32:51.929233   53852 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0528 21:32:51.929297   53852 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:32:51.939059   53852 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0528 21:32:51.939114   53852 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:32:51.948804   53852 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:32:51.957789   53852 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:32:51.966712   53852 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0528 21:32:51.974992   53852 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:32:51.983233   53852 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:32:51.998184   53852 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:32:52.006463   53852 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0528 21:32:52.013649   53852 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0528 21:32:52.013699   53852 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0528 21:32:52.024344   53852 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0528 21:32:52.032611   53852 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 21:32:52.144412   53852 ssh_runner.go:195] Run: sudo systemctl restart crio
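	The Run: lines from 21:32:51.386 through 21:32:52.144 above switch the guest from docker/cri-docker to CRI-O and reconfigure it for this profile. Consolidated for readability, the equivalent shell sequence is roughly the sketch below; the paths, crictl endpoint, and sed expressions are copied from the log, while the grouping into a single root shell script is only an illustration, not minikube's actual code:
		# Disable docker and cri-docker so CRI-O owns the CRI socket.
		systemctl stop -f cri-docker.socket cri-docker.service docker.socket docker.service || true
		systemctl disable cri-docker.socket docker.socket
		systemctl mask cri-docker.service docker.service
		# Point crictl at the CRI-O socket.
		printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' > /etc/crictl.yaml
		# Configure CRI-O: pause image, cgroupfs driver, conmon cgroup.
		conf=/etc/crio/crio.conf.d/02-crio.conf
		sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' "$conf"
		sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
		sed -i '/conmon_cgroup = .*/d' "$conf"
		sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"
		# Kernel prerequisites for pod networking, then restart CRI-O.
		modprobe br_netfilter
		echo 1 > /proc/sys/net/ipv4/ip_forward
		systemctl daemon-reload
		systemctl restart crio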
	I0528 21:32:52.266180   53852 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0528 21:32:52.266264   53852 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0528 21:32:52.271363   53852 start.go:562] Will wait 60s for crictl version
	I0528 21:32:52.271423   53852 ssh_runner.go:195] Run: which crictl
	I0528 21:32:52.275043   53852 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0528 21:32:52.312183   53852 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.22.3
	RuntimeApiVersion:  v1alpha2
	I0528 21:32:52.312288   53852 ssh_runner.go:195] Run: crio --version
	I0528 21:32:52.346374   53852 ssh_runner.go:195] Run: crio --version
	I0528 21:32:52.384424   53852 out.go:177] * Preparing Kubernetes v1.24.1 on CRI-O 1.22.3 ...
	I0528 21:32:51.413978   53591 pod_ready.go:92] pod "etcd-pause-547166" in "kube-system" namespace has status "Ready":"True"
	I0528 21:32:51.413999   53591 pod_ready.go:81] duration metric: took 12.507429981s for pod "etcd-pause-547166" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:51.414009   53591 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-547166" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:53.424142   53591 pod_ready.go:102] pod "kube-apiserver-pause-547166" in "kube-system" namespace has status "Ready":"False"
	I0528 21:32:53.922278   53591 pod_ready.go:92] pod "kube-apiserver-pause-547166" in "kube-system" namespace has status "Ready":"True"
	I0528 21:32:53.922309   53591 pod_ready.go:81] duration metric: took 2.508292133s for pod "kube-apiserver-pause-547166" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:53.922322   53591 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-547166" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:53.928408   53591 pod_ready.go:92] pod "kube-controller-manager-pause-547166" in "kube-system" namespace has status "Ready":"True"
	I0528 21:32:53.928432   53591 pod_ready.go:81] duration metric: took 6.100929ms for pod "kube-controller-manager-pause-547166" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:53.928444   53591 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-94v5m" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:53.934693   53591 pod_ready.go:92] pod "kube-proxy-94v5m" in "kube-system" namespace has status "Ready":"True"
	I0528 21:32:53.934720   53591 pod_ready.go:81] duration metric: took 6.267751ms for pod "kube-proxy-94v5m" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:53.934733   53591 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-547166" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:53.940475   53591 pod_ready.go:92] pod "kube-scheduler-pause-547166" in "kube-system" namespace has status "Ready":"True"
	I0528 21:32:53.940500   53591 pod_ready.go:81] duration metric: took 5.757636ms for pod "kube-scheduler-pause-547166" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:53.940510   53591 pod_ready.go:38] duration metric: took 15.043850969s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
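	The pod_ready checks above poll each control-plane pod in kube-system for the Ready condition with a 4m timeout. To inspect or reproduce the same condition by hand against this profile (illustrative only; the test does not run kubectl here), the equivalent would be:
		kubectl --context pause-547166 -n kube-system get pods -o wide
		kubectl --context pause-547166 -n kube-system wait pod/etcd-pause-547166 \
		  --for=condition=Ready --timeout=4m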
	I0528 21:32:53.940530   53591 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0528 21:32:53.958826   53591 ops.go:34] apiserver oom_adj: -16
	I0528 21:32:53.958851   53591 kubeadm.go:591] duration metric: took 35.087407147s to restartPrimaryControlPlane
	I0528 21:32:53.958863   53591 kubeadm.go:393] duration metric: took 35.30663683s to StartCluster
	I0528 21:32:53.958885   53591 settings.go:142] acquiring lock: {Name:mkc1182212d780ab38539839372c25eb989b2e70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:32:53.958960   53591 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 21:32:53.959804   53591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/kubeconfig: {Name:mkb9ab62df00efc21274382bbe7156f03baabac9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:32:53.960053   53591 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.108 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0528 21:32:53.964106   53591 out.go:177] * Verifying Kubernetes components...
	I0528 21:32:53.960160   53591 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0528 21:32:53.960417   53591 config.go:182] Loaded profile config "pause-547166": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 21:32:53.965624   53591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 21:32:53.966867   53591 out.go:177] * Enabled addons: 
	I0528 21:32:51.085630   53940 machine.go:94] provisionDockerMachine start ...
	I0528 21:32:51.085645   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .DriverName
	I0528 21:32:51.085818   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetSSHHostname
	I0528 21:32:51.088161   53940 main.go:141] libmachine: (cert-expiration-257793) DBG | domain cert-expiration-257793 has defined MAC address 52:54:00:a9:2c:e1 in network mk-cert-expiration-257793
	I0528 21:32:51.088591   53940 main.go:141] libmachine: (cert-expiration-257793) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:2c:e1", ip: ""} in network mk-cert-expiration-257793: {Iface:virbr4 ExpiryTime:2024-05-28 22:29:04 +0000 UTC Type:0 Mac:52:54:00:a9:2c:e1 Iaid: IPaddr:192.168.72.246 Prefix:24 Hostname:cert-expiration-257793 Clientid:01:52:54:00:a9:2c:e1}
	I0528 21:32:51.088622   53940 main.go:141] libmachine: (cert-expiration-257793) DBG | domain cert-expiration-257793 has defined IP address 192.168.72.246 and MAC address 52:54:00:a9:2c:e1 in network mk-cert-expiration-257793
	I0528 21:32:51.088790   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetSSHPort
	I0528 21:32:51.088943   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetSSHKeyPath
	I0528 21:32:51.089117   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetSSHKeyPath
	I0528 21:32:51.089308   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetSSHUsername
	I0528 21:32:51.089489   53940 main.go:141] libmachine: Using SSH client type: native
	I0528 21:32:51.089659   53940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.246 22 <nil> <nil>}
	I0528 21:32:51.089664   53940 main.go:141] libmachine: About to run SSH command:
	hostname
	I0528 21:32:51.211551   53940 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-257793
	
	I0528 21:32:51.211568   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetMachineName
	I0528 21:32:51.211868   53940 buildroot.go:166] provisioning hostname "cert-expiration-257793"
	I0528 21:32:51.211888   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetMachineName
	I0528 21:32:51.212115   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetSSHHostname
	I0528 21:32:51.215082   53940 main.go:141] libmachine: (cert-expiration-257793) DBG | domain cert-expiration-257793 has defined MAC address 52:54:00:a9:2c:e1 in network mk-cert-expiration-257793
	I0528 21:32:51.215583   53940 main.go:141] libmachine: (cert-expiration-257793) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:2c:e1", ip: ""} in network mk-cert-expiration-257793: {Iface:virbr4 ExpiryTime:2024-05-28 22:29:04 +0000 UTC Type:0 Mac:52:54:00:a9:2c:e1 Iaid: IPaddr:192.168.72.246 Prefix:24 Hostname:cert-expiration-257793 Clientid:01:52:54:00:a9:2c:e1}
	I0528 21:32:51.215609   53940 main.go:141] libmachine: (cert-expiration-257793) DBG | domain cert-expiration-257793 has defined IP address 192.168.72.246 and MAC address 52:54:00:a9:2c:e1 in network mk-cert-expiration-257793
	I0528 21:32:51.215679   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetSSHPort
	I0528 21:32:51.215864   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetSSHKeyPath
	I0528 21:32:51.216005   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetSSHKeyPath
	I0528 21:32:51.216136   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetSSHUsername
	I0528 21:32:51.216343   53940 main.go:141] libmachine: Using SSH client type: native
	I0528 21:32:51.216550   53940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.246 22 <nil> <nil>}
	I0528 21:32:51.216558   53940 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-257793 && echo "cert-expiration-257793" | sudo tee /etc/hostname
	I0528 21:32:51.353068   53940 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-257793
	
	I0528 21:32:51.353082   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetSSHHostname
	I0528 21:32:51.355955   53940 main.go:141] libmachine: (cert-expiration-257793) DBG | domain cert-expiration-257793 has defined MAC address 52:54:00:a9:2c:e1 in network mk-cert-expiration-257793
	I0528 21:32:51.356338   53940 main.go:141] libmachine: (cert-expiration-257793) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:2c:e1", ip: ""} in network mk-cert-expiration-257793: {Iface:virbr4 ExpiryTime:2024-05-28 22:29:04 +0000 UTC Type:0 Mac:52:54:00:a9:2c:e1 Iaid: IPaddr:192.168.72.246 Prefix:24 Hostname:cert-expiration-257793 Clientid:01:52:54:00:a9:2c:e1}
	I0528 21:32:51.356374   53940 main.go:141] libmachine: (cert-expiration-257793) DBG | domain cert-expiration-257793 has defined IP address 192.168.72.246 and MAC address 52:54:00:a9:2c:e1 in network mk-cert-expiration-257793
	I0528 21:32:51.356601   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetSSHPort
	I0528 21:32:51.356787   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetSSHKeyPath
	I0528 21:32:51.356972   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetSSHKeyPath
	I0528 21:32:51.357137   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetSSHUsername
	I0528 21:32:51.357403   53940 main.go:141] libmachine: Using SSH client type: native
	I0528 21:32:51.357609   53940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.246 22 <nil> <nil>}
	I0528 21:32:51.357626   53940 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-257793' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-257793/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-257793' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 21:32:51.474959   53940 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 21:32:51.474976   53940 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18966-3963/.minikube CaCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18966-3963/.minikube}
	I0528 21:32:51.474993   53940 buildroot.go:174] setting up certificates
	I0528 21:32:51.475002   53940 provision.go:84] configureAuth start
	I0528 21:32:51.475039   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetMachineName
	I0528 21:32:51.475335   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetIP
	I0528 21:32:51.478216   53940 main.go:141] libmachine: (cert-expiration-257793) DBG | domain cert-expiration-257793 has defined MAC address 52:54:00:a9:2c:e1 in network mk-cert-expiration-257793
	I0528 21:32:51.478688   53940 main.go:141] libmachine: (cert-expiration-257793) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:2c:e1", ip: ""} in network mk-cert-expiration-257793: {Iface:virbr4 ExpiryTime:2024-05-28 22:29:04 +0000 UTC Type:0 Mac:52:54:00:a9:2c:e1 Iaid: IPaddr:192.168.72.246 Prefix:24 Hostname:cert-expiration-257793 Clientid:01:52:54:00:a9:2c:e1}
	I0528 21:32:51.478712   53940 main.go:141] libmachine: (cert-expiration-257793) DBG | domain cert-expiration-257793 has defined IP address 192.168.72.246 and MAC address 52:54:00:a9:2c:e1 in network mk-cert-expiration-257793
	I0528 21:32:51.478899   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetSSHHostname
	I0528 21:32:51.481380   53940 main.go:141] libmachine: (cert-expiration-257793) DBG | domain cert-expiration-257793 has defined MAC address 52:54:00:a9:2c:e1 in network mk-cert-expiration-257793
	I0528 21:32:51.481784   53940 main.go:141] libmachine: (cert-expiration-257793) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:2c:e1", ip: ""} in network mk-cert-expiration-257793: {Iface:virbr4 ExpiryTime:2024-05-28 22:29:04 +0000 UTC Type:0 Mac:52:54:00:a9:2c:e1 Iaid: IPaddr:192.168.72.246 Prefix:24 Hostname:cert-expiration-257793 Clientid:01:52:54:00:a9:2c:e1}
	I0528 21:32:51.481810   53940 main.go:141] libmachine: (cert-expiration-257793) DBG | domain cert-expiration-257793 has defined IP address 192.168.72.246 and MAC address 52:54:00:a9:2c:e1 in network mk-cert-expiration-257793
	I0528 21:32:51.481905   53940 provision.go:143] copyHostCerts
	I0528 21:32:51.481963   53940 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem, removing ...
	I0528 21:32:51.481972   53940 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem
	I0528 21:32:51.482020   53940 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem (1078 bytes)
	I0528 21:32:51.482166   53940 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem, removing ...
	I0528 21:32:51.482170   53940 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem
	I0528 21:32:51.482191   53940 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem (1123 bytes)
	I0528 21:32:51.482252   53940 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem, removing ...
	I0528 21:32:51.482257   53940 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem
	I0528 21:32:51.482296   53940 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem (1675 bytes)
	I0528 21:32:51.482362   53940 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-257793 san=[127.0.0.1 192.168.72.246 cert-expiration-257793 localhost minikube]
	I0528 21:32:51.730091   53940 provision.go:177] copyRemoteCerts
	I0528 21:32:51.730135   53940 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 21:32:51.730155   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetSSHHostname
	I0528 21:32:51.733028   53940 main.go:141] libmachine: (cert-expiration-257793) DBG | domain cert-expiration-257793 has defined MAC address 52:54:00:a9:2c:e1 in network mk-cert-expiration-257793
	I0528 21:32:51.733385   53940 main.go:141] libmachine: (cert-expiration-257793) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:2c:e1", ip: ""} in network mk-cert-expiration-257793: {Iface:virbr4 ExpiryTime:2024-05-28 22:29:04 +0000 UTC Type:0 Mac:52:54:00:a9:2c:e1 Iaid: IPaddr:192.168.72.246 Prefix:24 Hostname:cert-expiration-257793 Clientid:01:52:54:00:a9:2c:e1}
	I0528 21:32:51.733414   53940 main.go:141] libmachine: (cert-expiration-257793) DBG | domain cert-expiration-257793 has defined IP address 192.168.72.246 and MAC address 52:54:00:a9:2c:e1 in network mk-cert-expiration-257793
	I0528 21:32:51.733599   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetSSHPort
	I0528 21:32:51.733856   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetSSHKeyPath
	I0528 21:32:51.734054   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetSSHUsername
	I0528 21:32:51.734205   53940 sshutil.go:53] new ssh client: &{IP:192.168.72.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/cert-expiration-257793/id_rsa Username:docker}
	I0528 21:32:51.828230   53940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0528 21:32:51.857890   53940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0528 21:32:51.883847   53940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0528 21:32:51.910068   53940 provision.go:87] duration metric: took 435.055759ms to configureAuth
	I0528 21:32:51.910087   53940 buildroot.go:189] setting minikube options for container-runtime
	I0528 21:32:51.910313   53940 config.go:182] Loaded profile config "cert-expiration-257793": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 21:32:51.910392   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetSSHHostname
	I0528 21:32:51.913668   53940 main.go:141] libmachine: (cert-expiration-257793) DBG | domain cert-expiration-257793 has defined MAC address 52:54:00:a9:2c:e1 in network mk-cert-expiration-257793
	I0528 21:32:51.914152   53940 main.go:141] libmachine: (cert-expiration-257793) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:2c:e1", ip: ""} in network mk-cert-expiration-257793: {Iface:virbr4 ExpiryTime:2024-05-28 22:29:04 +0000 UTC Type:0 Mac:52:54:00:a9:2c:e1 Iaid: IPaddr:192.168.72.246 Prefix:24 Hostname:cert-expiration-257793 Clientid:01:52:54:00:a9:2c:e1}
	I0528 21:32:51.914172   53940 main.go:141] libmachine: (cert-expiration-257793) DBG | domain cert-expiration-257793 has defined IP address 192.168.72.246 and MAC address 52:54:00:a9:2c:e1 in network mk-cert-expiration-257793
	I0528 21:32:51.914516   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetSSHPort
	I0528 21:32:51.914765   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetSSHKeyPath
	I0528 21:32:51.914915   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetSSHKeyPath
	I0528 21:32:51.915085   53940 main.go:141] libmachine: (cert-expiration-257793) Calling .GetSSHUsername
	I0528 21:32:51.915279   53940 main.go:141] libmachine: Using SSH client type: native
	I0528 21:32:51.915472   53940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.246 22 <nil> <nil>}
	I0528 21:32:51.915482   53940 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0528 21:32:53.968094   53591 addons.go:510] duration metric: took 7.938072ms for enable addons: enabled=[]
	I0528 21:32:54.157343   53591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 21:32:54.177591   53591 node_ready.go:35] waiting up to 6m0s for node "pause-547166" to be "Ready" ...
	I0528 21:32:54.180983   53591 node_ready.go:49] node "pause-547166" has status "Ready":"True"
	I0528 21:32:54.181011   53591 node_ready.go:38] duration metric: took 3.378007ms for node "pause-547166" to be "Ready" ...
	I0528 21:32:54.181019   53591 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 21:32:54.186075   53591 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7rb9n" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:54.318716   53591 pod_ready.go:92] pod "coredns-7db6d8ff4d-7rb9n" in "kube-system" namespace has status "Ready":"True"
	I0528 21:32:54.318740   53591 pod_ready.go:81] duration metric: took 132.642486ms for pod "coredns-7db6d8ff4d-7rb9n" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:54.318749   53591 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-547166" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:54.718452   53591 pod_ready.go:92] pod "etcd-pause-547166" in "kube-system" namespace has status "Ready":"True"
	I0528 21:32:54.718475   53591 pod_ready.go:81] duration metric: took 399.720706ms for pod "etcd-pause-547166" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:54.718483   53591 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-547166" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:55.118676   53591 pod_ready.go:92] pod "kube-apiserver-pause-547166" in "kube-system" namespace has status "Ready":"True"
	I0528 21:32:55.118700   53591 pod_ready.go:81] duration metric: took 400.211491ms for pod "kube-apiserver-pause-547166" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:55.118709   53591 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-547166" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:55.518015   53591 pod_ready.go:92] pod "kube-controller-manager-pause-547166" in "kube-system" namespace has status "Ready":"True"
	I0528 21:32:55.518059   53591 pod_ready.go:81] duration metric: took 399.342005ms for pod "kube-controller-manager-pause-547166" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:55.518082   53591 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-94v5m" in "kube-system" namespace to be "Ready" ...
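The pod_ready.go waits above poll each control-plane pod until its Ready condition is True. A minimal client-go sketch of that kind of poll, assuming a kubeconfig path and borrowing one pod name from the log; the timeout and interval are placeholders, not minikube's values.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-pause-547166", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}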
	I0528 21:32:52.385621   53852 main.go:141] libmachine: (stopped-upgrade-742900) Calling .GetIP
	I0528 21:32:52.388183   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:52.388481   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:2c:e6", ip: ""} in network mk-stopped-upgrade-742900: {Iface:virbr3 ExpiryTime:2024-05-28 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d5:2c:e6 Iaid: IPaddr:192.168.61.251 Prefix:24 Hostname:stopped-upgrade-742900 Clientid:01:52:54:00:d5:2c:e6}
	I0528 21:32:52.388527   53852 main.go:141] libmachine: (stopped-upgrade-742900) DBG | domain stopped-upgrade-742900 has defined IP address 192.168.61.251 and MAC address 52:54:00:d5:2c:e6 in network mk-stopped-upgrade-742900
	I0528 21:32:52.388735   53852 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0528 21:32:52.392824   53852 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 21:32:52.404098   53852 kubeadm.go:877] updating cluster {Name:stopped-upgrade-742900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stop
ped-upgrade-742900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.251 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirr
or: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0528 21:32:52.404213   53852 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0528 21:32:52.404283   53852 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 21:32:52.445101   53852 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.1". assuming images are not preloaded.
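The crio.go check above shells out to `sudo crictl images --output json` and looks for the expected kube-apiserver tag before deciding whether the preload applies. A hedged sketch of that check using os/exec; the JSON field names (images/repoTags) are an assumption about crictl's output shape, not something stated in the log.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Assumed shape of `crictl images --output json`; only the fields we need.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println("unexpected crictl output:", err)
		return
	}
	want := "registry.k8s.io/kube-apiserver:v1.24.1"
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				fmt.Println("preloaded image present:", tag)
				return
			}
		}
	}
	fmt.Println("couldn't find preloaded image for", want, "- assuming images are not preloaded")
}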
	I0528 21:32:52.445178   53852 ssh_runner.go:195] Run: which lz4
	I0528 21:32:52.449063   53852 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0528 21:32:52.453241   53852 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0528 21:32:52.453269   53852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (496813465 bytes)
	I0528 21:32:54.072247   53852 crio.go:462] duration metric: took 1.623205879s to copy over tarball
	I0528 21:32:54.072333   53852 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0528 21:32:55.918489   53591 pod_ready.go:92] pod "kube-proxy-94v5m" in "kube-system" namespace has status "Ready":"True"
	I0528 21:32:55.918524   53591 pod_ready.go:81] duration metric: took 400.434487ms for pod "kube-proxy-94v5m" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:55.918538   53591 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-547166" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:56.318565   53591 pod_ready.go:92] pod "kube-scheduler-pause-547166" in "kube-system" namespace has status "Ready":"True"
	I0528 21:32:56.318596   53591 pod_ready.go:81] duration metric: took 400.049782ms for pod "kube-scheduler-pause-547166" in "kube-system" namespace to be "Ready" ...
	I0528 21:32:56.318607   53591 pod_ready.go:38] duration metric: took 2.137578666s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 21:32:56.318625   53591 api_server.go:52] waiting for apiserver process to appear ...
	I0528 21:32:56.318691   53591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:32:56.332483   53591 api_server.go:72] duration metric: took 2.372396031s to wait for apiserver process to appear ...
	I0528 21:32:56.332512   53591 api_server.go:88] waiting for apiserver healthz status ...
	I0528 21:32:56.332532   53591 api_server.go:253] Checking apiserver healthz at https://192.168.50.108:8443/healthz ...
	I0528 21:32:56.336846   53591 api_server.go:279] https://192.168.50.108:8443/healthz returned 200:
	ok
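The api_server.go step above probes the apiserver's /healthz endpoint and treats a 200 "ok" response as healthy. A minimal Go probe of the same endpoint is sketched below; certificate verification is skipped here for brevity, whereas the real check would trust the cluster CA.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a cluster-CA certificate; this quick probe skips
		// verification instead of loading that CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.50.108:8443/healthz")
	if err != nil {
		fmt.Println("healthz probe failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect 200: ok
}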
	I0528 21:32:56.337934   53591 api_server.go:141] control plane version: v1.30.1
	I0528 21:32:56.337968   53591 api_server.go:131] duration metric: took 5.44728ms to wait for apiserver health ...
	I0528 21:32:56.337978   53591 system_pods.go:43] waiting for kube-system pods to appear ...
	I0528 21:32:56.520524   53591 system_pods.go:59] 6 kube-system pods found
	I0528 21:32:56.520562   53591 system_pods.go:61] "coredns-7db6d8ff4d-7rb9n" [4e37fe79-cc67-4012-93b6-79ecc1f88ec7] Running
	I0528 21:32:56.520569   53591 system_pods.go:61] "etcd-pause-547166" [d9bfd727-090c-447f-8d1c-fb41302a4f99] Running
	I0528 21:32:56.520574   53591 system_pods.go:61] "kube-apiserver-pause-547166" [9bfb145c-9adf-4ba1-b909-e5d1fc40a080] Running
	I0528 21:32:56.520579   53591 system_pods.go:61] "kube-controller-manager-pause-547166" [605d47d8-50b2-4b0c-8ea3-0da1a7ce121a] Running
	I0528 21:32:56.520584   53591 system_pods.go:61] "kube-proxy-94v5m" [b8bf4bf8-52a8-4277-a373-bbeef065c3f5] Running
	I0528 21:32:56.520589   53591 system_pods.go:61] "kube-scheduler-pause-547166" [262f8e41-4c82-4d7a-8f49-7be7a940bd96] Running
	I0528 21:32:56.520596   53591 system_pods.go:74] duration metric: took 182.608276ms to wait for pod list to return data ...
	I0528 21:32:56.520620   53591 default_sa.go:34] waiting for default service account to be created ...
	I0528 21:32:56.718634   53591 default_sa.go:45] found service account: "default"
	I0528 21:32:56.718658   53591 default_sa.go:55] duration metric: took 198.026927ms for default service account to be created ...
	I0528 21:32:56.718666   53591 system_pods.go:116] waiting for k8s-apps to be running ...
	I0528 21:32:56.921427   53591 system_pods.go:86] 6 kube-system pods found
	I0528 21:32:56.921461   53591 system_pods.go:89] "coredns-7db6d8ff4d-7rb9n" [4e37fe79-cc67-4012-93b6-79ecc1f88ec7] Running
	I0528 21:32:56.921469   53591 system_pods.go:89] "etcd-pause-547166" [d9bfd727-090c-447f-8d1c-fb41302a4f99] Running
	I0528 21:32:56.921476   53591 system_pods.go:89] "kube-apiserver-pause-547166" [9bfb145c-9adf-4ba1-b909-e5d1fc40a080] Running
	I0528 21:32:56.921482   53591 system_pods.go:89] "kube-controller-manager-pause-547166" [605d47d8-50b2-4b0c-8ea3-0da1a7ce121a] Running
	I0528 21:32:56.921488   53591 system_pods.go:89] "kube-proxy-94v5m" [b8bf4bf8-52a8-4277-a373-bbeef065c3f5] Running
	I0528 21:32:56.921499   53591 system_pods.go:89] "kube-scheduler-pause-547166" [262f8e41-4c82-4d7a-8f49-7be7a940bd96] Running
	I0528 21:32:56.921509   53591 system_pods.go:126] duration metric: took 202.837672ms to wait for k8s-apps to be running ...
	I0528 21:32:56.921523   53591 system_svc.go:44] waiting for kubelet service to be running ....
	I0528 21:32:56.921583   53591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 21:32:56.944951   53591 system_svc.go:56] duration metric: took 23.421521ms WaitForService to wait for kubelet
	I0528 21:32:56.944982   53591 kubeadm.go:576] duration metric: took 2.984898909s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 21:32:56.945013   53591 node_conditions.go:102] verifying NodePressure condition ...
	I0528 21:32:57.119937   53591 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 21:32:57.119969   53591 node_conditions.go:123] node cpu capacity is 2
	I0528 21:32:57.119996   53591 node_conditions.go:105] duration metric: took 174.977103ms to run NodePressure ...
	I0528 21:32:57.120011   53591 start.go:240] waiting for startup goroutines ...
	I0528 21:32:57.120024   53591 start.go:245] waiting for cluster config update ...
	I0528 21:32:57.120038   53591 start.go:254] writing updated cluster config ...
	I0528 21:32:57.120464   53591 ssh_runner.go:195] Run: rm -f paused
	I0528 21:32:57.177813   53591 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0528 21:32:57.180081   53591 out.go:177] * Done! kubectl is now configured to use "pause-547166" cluster and "default" namespace by default
	I0528 21:32:57.007404   53852 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.935045297s)
	I0528 21:32:57.007428   53852 crio.go:469] duration metric: took 2.935156035s to extract the tarball
	I0528 21:32:57.007435   53852 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0528 21:32:57.049521   53852 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 21:32:57.085322   53852 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.1". assuming images are not preloaded.
	I0528 21:32:57.085356   53852 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0528 21:32:57.085455   53852 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0528 21:32:57.085478   53852 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0528 21:32:57.085494   53852 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0528 21:32:57.085483   53852 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0528 21:32:57.085551   53852 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0528 21:32:57.085681   53852 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0528 21:32:57.085683   53852 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0528 21:32:57.085744   53852 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0528 21:32:57.086969   53852 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0528 21:32:57.087016   53852 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0528 21:32:57.087003   53852 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0528 21:32:57.086969   53852 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0528 21:32:57.087105   53852 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0528 21:32:57.087175   53852 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0528 21:32:57.087395   53852 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0528 21:32:57.087747   53852 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0528 21:32:57.257379   53852 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0528 21:32:57.260526   53852 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0528 21:32:57.314151   53852 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0528 21:32:57.314200   53852 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0528 21:32:57.314240   53852 ssh_runner.go:195] Run: which crictl
	I0528 21:32:57.322104   53852 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "beb86f5d8e6cd2234ca24649b74bd10e1e12446764560a3804d85dd6815d0a18" in container runtime
	I0528 21:32:57.322149   53852 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0528 21:32:57.322202   53852 ssh_runner.go:195] Run: which crictl
	I0528 21:32:57.322292   53852 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0528 21:32:57.346323   53852 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0528 21:32:57.348215   53852 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0528 21:32:57.350218   53852 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0528 21:32:57.353477   53852 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0528 21:32:57.356605   53852 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.1
	I0528 21:32:57.356707   53852 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0528 21:32:57.356783   53852 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0
	I0528 21:32:57.360159   53852 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0528 21:32:57.493052   53852 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "b4ea7e648530d171b38f67305e22caf49f9d968d71c558e663709b805076538d" in container runtime
	I0528 21:32:57.493090   53852 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0528 21:32:57.493120   53852 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "18688a72645c5d34e1cc70d8deb5bef4fc6c9073bb1b53c7812856afc1de1237" in container runtime
	I0528 21:32:57.493129   53852 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0528 21:32:57.493141   53852 ssh_runner.go:195] Run: which crictl
	I0528 21:32:57.493150   53852 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0528 21:32:57.493159   53852 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0528 21:32:57.493192   53852 ssh_runner.go:195] Run: which crictl
	I0528 21:32:57.493194   53852 ssh_runner.go:195] Run: which crictl
	I0528 21:32:57.493779   53852 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0528 21:32:57.493808   53852 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0528 21:32:57.493836   53852 ssh_runner.go:195] Run: which crictl
	I0528 21:32:57.500103   53852 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0528 21:32:57.500148   53852 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0528 21:32:57.500186   53852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (102146048 bytes)
	I0528 21:32:57.500217   53852 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "e9f4b425f9192c11c0fa338cabe04f832aa5cea6dcbba2d1bd2a931224421693" in container runtime
	I0528 21:32:57.500239   53852 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0528 21:32:57.500253   53852 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0528 21:32:57.500274   53852 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.1
	I0528 21:32:57.500309   53852 ssh_runner.go:195] Run: which crictl
	I0528 21:32:57.500322   53852 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0528 21:32:57.508874   53852 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0528 21:32:57.556526   53852 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.1
	I0528 21:32:57.602661   53852 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0528 21:32:57.602755   53852 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6
	I0528 21:32:57.602668   53852 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0528 21:32:57.602809   53852 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0528 21:32:57.647850   53852 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0528 21:32:57.647941   53852 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7
	I0528 21:32:57.683898   53852 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.1
	I0528 21:32:57.683954   53852 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0528 21:32:57.683986   53852 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0528 21:32:57.683988   53852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (13586432 bytes)
	I0528 21:32:57.684029   53852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (311296 bytes)
	I0528 21:32:57.762089   53852 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0528 21:32:57.762145   53852 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0528 21:32:57.964741   53852 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0528 21:33:00.332492   53852 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.7: (2.570325782s)
	I0528 21:33:00.332525   53852 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0528 21:33:00.332532   53852 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.367755035s)
	I0528 21:33:00.332550   53852 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0528 21:33:00.332613   53852 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0528 21:33:00.786315   53852 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0528 21:33:00.786372   53852 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0528 21:33:00.786434   53852 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
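The crio.go/ssh_runner steps above load each cached image tarball into the node with podman and report a duration metric. Below is a small exec-and-time sketch of that pattern, run locally rather than over SSH; the image path is copied from the log.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	img := "/var/lib/minikube/images/etcd_3.5.3-0" // path from the log
	start := time.Now()
	out, err := exec.Command("sudo", "podman", "load", "-i", img).CombinedOutput()
	if err != nil {
		fmt.Printf("podman load failed: %v\n%s\n", err, out)
		return
	}
	// Mirrors the "duration metric: took ..." lines in the log.
	fmt.Printf("Completed: sudo podman load -i %s: (%s)\n", img, time.Since(start))
}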
	
	
	==> CRI-O <==
	May 28 21:33:02 pause-547166 crio[2469]: time="2024-05-28 21:33:02.339212470Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716931982339180489,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=faefb4e3-c3e4-4d40-8389-1151b62e7ad9 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:33:02 pause-547166 crio[2469]: time="2024-05-28 21:33:02.339951349Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ba95455c-851c-4ffe-bf81-1f9305dd651f name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:33:02 pause-547166 crio[2469]: time="2024-05-28 21:33:02.340062924Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ba95455c-851c-4ffe-bf81-1f9305dd651f name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:33:02 pause-547166 crio[2469]: time="2024-05-28 21:33:02.340753600Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:08b452587905adc995d6be896ca09b4cdc8f7e2c8006c1dde8f840780234c73f,PodSandboxId:55924196ca739084a0b3080fb8ccb335229f0224c24f9ddbadeb22b8111af101,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716931957799469178,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-94v5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8bf4bf8-52a8-4277-a373-bbeef065c3f5,},Annotations:map[string]string{io.kubernetes.container.hash: f685f60e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:226a0aa4c06bac0f992b982b30b88bf61cec228340067b5bdcf44890fa7c7909,PodSandboxId:e05247b016b1e7873ba28c88263fac15189e7852a7a983aa4e717d134f124944,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716931954043668132,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 347a9de7fc20962e4a0a09ef87e54be4,},Annotations:map[string]string{io.kubernetes.container.hash: edaaa3ad,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b38033e90e0d8625417b7232d5fc9711d210d72e3357eb81d5502488fd8e0d2c,PodSandboxId:f74361ca9d7e5fdcf09c3a46a9f6b465b50dc69accb28907c9c25d8a9844e1a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716931954010754734,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2509cd598b212d9d9a62337e8e8714,},Annotations:map[string]string{io.kubernetes.container.hash: 85c96444,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0df6e2b0ae6cae289b552e0b07bcf56cf39f7eea198d2ca44e9370b5edb0395b,PodSandboxId:8eb72e854358643cdcd081df9f3475788139fbbc2e7962e1455177f9d9b4aa89,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716931953999091903,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80a357c6b54cb6407e6733b556b49073,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9bc6b0cdba4c5acc81d7fd9a4fb877957755ca949984c6fe374005306c9e3da,PodSandboxId:aaeebdebe83d5c8a4c38add635a029fd700568be5bfb5ea47b5bb3167cedcd4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716931953987479961,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb895cce6b7bf17c8ad4004a2ee11778,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b435d362fd1970b9f1fca4268df3152c9a49524530b52b325893dcca65a19e,PodSandboxId:55924196ca739084a0b3080fb8ccb335229f0224c24f9ddbadeb22b8111af101,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_CREATED,CreatedAt:1716931952006438463,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-94v5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8bf4bf8-52a8-4277-a373-bbeef065c3f5,},Annotations:map[string]string{io.kubernetes.container.hash: f685f60e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminati
onGracePeriod: 30,},},&Container{Id:c9943d50da1bc58e17ce66130e50e9d6741f72e20b97ac6b5c28cf86486623dd,PodSandboxId:474d094735614e118cfbf97a8d06f1a3f530daa9379abe1e0a06ea7f2fd1c9f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716931949497318043,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7rb9n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e37fe79-cc67-4012-93b6-79ecc1f88ec7,},Annotations:map[string]string{io.kubernetes.container.hash: 802984b6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\
"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e2fdee3b8477fc1ea3db36c29f9c36519b2f5dfb905e78905d1b98b3beb3b98,PodSandboxId:474d094735614e118cfbf97a8d06f1a3f530daa9379abe1e0a06ea7f2fd1c9f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716931938586698177,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7rb9n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e37fe79-cc67-4012-93b6-79ecc1f88ec7,},Annotations:map[string]string{io.kubernetes.container.hash: 802984b6,io.kubernetes.container.port
s: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a31c9d04166019c2a49c4e013e4b94aa69a5bb31cef09d61eb8df29082fac19,PodSandboxId:aaeebdebe83d5c8a4c38add635a029fd700568be5bfb5ea47b5bb3167cedcd4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716931937823461156,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pa
use-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb895cce6b7bf17c8ad4004a2ee11778,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:389d655064d2f72cd1d31d88f738025b8d89ca4b2a1b99ae56387c719cc80594,PodSandboxId:e05247b016b1e7873ba28c88263fac15189e7852a7a983aa4e717d134f124944,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716931937808886708,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-547166,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: 347a9de7fc20962e4a0a09ef87e54be4,},Annotations:map[string]string{io.kubernetes.container.hash: edaaa3ad,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f249f829d4e7068aec569445d4895582618b4d27ac743a409d93faad930c9de,PodSandboxId:f74361ca9d7e5fdcf09c3a46a9f6b465b50dc69accb28907c9c25d8a9844e1a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716931937725314411,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 2b2509cd598b212d9d9a62337e8e8714,},Annotations:map[string]string{io.kubernetes.container.hash: 85c96444,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fd4e65044b3a9dcd1f16966186410e1f690d18a4a5d286b7249fa824cb0213d,PodSandboxId:8eb72e854358643cdcd081df9f3475788139fbbc2e7962e1455177f9d9b4aa89,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716931937585135604,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kuber
netes.pod.uid: 80a357c6b54cb6407e6733b556b49073,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ba95455c-851c-4ffe-bf81-1f9305dd651f name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:33:02 pause-547166 crio[2469]: time="2024-05-28 21:33:02.385484905Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=79a89bdf-7451-4ff4-81b1-4f67ff88892c name=/runtime.v1.RuntimeService/Version
	May 28 21:33:02 pause-547166 crio[2469]: time="2024-05-28 21:33:02.385553773Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=79a89bdf-7451-4ff4-81b1-4f67ff88892c name=/runtime.v1.RuntimeService/Version
	May 28 21:33:02 pause-547166 crio[2469]: time="2024-05-28 21:33:02.387054591Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b1b8c9a5-26ea-4416-9c60-d25bb25361c5 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:33:02 pause-547166 crio[2469]: time="2024-05-28 21:33:02.387589145Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716931982387557568,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b1b8c9a5-26ea-4416-9c60-d25bb25361c5 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:33:02 pause-547166 crio[2469]: time="2024-05-28 21:33:02.388227025Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1661f29b-96ce-4b84-ab9d-8559edac0fd5 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:33:02 pause-547166 crio[2469]: time="2024-05-28 21:33:02.388329448Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1661f29b-96ce-4b84-ab9d-8559edac0fd5 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:33:02 pause-547166 crio[2469]: time="2024-05-28 21:33:02.388654174Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:08b452587905adc995d6be896ca09b4cdc8f7e2c8006c1dde8f840780234c73f,PodSandboxId:55924196ca739084a0b3080fb8ccb335229f0224c24f9ddbadeb22b8111af101,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716931957799469178,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-94v5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8bf4bf8-52a8-4277-a373-bbeef065c3f5,},Annotations:map[string]string{io.kubernetes.container.hash: f685f60e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:226a0aa4c06bac0f992b982b30b88bf61cec228340067b5bdcf44890fa7c7909,PodSandboxId:e05247b016b1e7873ba28c88263fac15189e7852a7a983aa4e717d134f124944,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716931954043668132,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 347a9de7fc20962e4a0a09ef87e54be4,},Annotations:map[string]string{io.kubernetes.container.hash: edaaa3ad,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b38033e90e0d8625417b7232d5fc9711d210d72e3357eb81d5502488fd8e0d2c,PodSandboxId:f74361ca9d7e5fdcf09c3a46a9f6b465b50dc69accb28907c9c25d8a9844e1a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716931954010754734,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2509cd598b212d9d9a62337e8e8714,},Annotations:map[string]string{io.kubernetes.container.hash: 85c96444,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0df6e2b0ae6cae289b552e0b07bcf56cf39f7eea198d2ca44e9370b5edb0395b,PodSandboxId:8eb72e854358643cdcd081df9f3475788139fbbc2e7962e1455177f9d9b4aa89,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716931953999091903,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80a357c6b54cb6407e6733b556b49073,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9bc6b0cdba4c5acc81d7fd9a4fb877957755ca949984c6fe374005306c9e3da,PodSandboxId:aaeebdebe83d5c8a4c38add635a029fd700568be5bfb5ea47b5bb3167cedcd4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716931953987479961,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb895cce6b7bf17c8ad4004a2ee11778,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b435d362fd1970b9f1fca4268df3152c9a49524530b52b325893dcca65a19e,PodSandboxId:55924196ca739084a0b3080fb8ccb335229f0224c24f9ddbadeb22b8111af101,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_CREATED,CreatedAt:1716931952006438463,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-94v5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8bf4bf8-52a8-4277-a373-bbeef065c3f5,},Annotations:map[string]string{io.kubernetes.container.hash: f685f60e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminati
onGracePeriod: 30,},},&Container{Id:c9943d50da1bc58e17ce66130e50e9d6741f72e20b97ac6b5c28cf86486623dd,PodSandboxId:474d094735614e118cfbf97a8d06f1a3f530daa9379abe1e0a06ea7f2fd1c9f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716931949497318043,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7rb9n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e37fe79-cc67-4012-93b6-79ecc1f88ec7,},Annotations:map[string]string{io.kubernetes.container.hash: 802984b6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\
"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e2fdee3b8477fc1ea3db36c29f9c36519b2f5dfb905e78905d1b98b3beb3b98,PodSandboxId:474d094735614e118cfbf97a8d06f1a3f530daa9379abe1e0a06ea7f2fd1c9f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716931938586698177,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7rb9n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e37fe79-cc67-4012-93b6-79ecc1f88ec7,},Annotations:map[string]string{io.kubernetes.container.hash: 802984b6,io.kubernetes.container.port
s: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a31c9d04166019c2a49c4e013e4b94aa69a5bb31cef09d61eb8df29082fac19,PodSandboxId:aaeebdebe83d5c8a4c38add635a029fd700568be5bfb5ea47b5bb3167cedcd4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716931937823461156,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pa
use-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb895cce6b7bf17c8ad4004a2ee11778,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:389d655064d2f72cd1d31d88f738025b8d89ca4b2a1b99ae56387c719cc80594,PodSandboxId:e05247b016b1e7873ba28c88263fac15189e7852a7a983aa4e717d134f124944,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716931937808886708,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-547166,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: 347a9de7fc20962e4a0a09ef87e54be4,},Annotations:map[string]string{io.kubernetes.container.hash: edaaa3ad,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f249f829d4e7068aec569445d4895582618b4d27ac743a409d93faad930c9de,PodSandboxId:f74361ca9d7e5fdcf09c3a46a9f6b465b50dc69accb28907c9c25d8a9844e1a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716931937725314411,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 2b2509cd598b212d9d9a62337e8e8714,},Annotations:map[string]string{io.kubernetes.container.hash: 85c96444,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fd4e65044b3a9dcd1f16966186410e1f690d18a4a5d286b7249fa824cb0213d,PodSandboxId:8eb72e854358643cdcd081df9f3475788139fbbc2e7962e1455177f9d9b4aa89,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716931937585135604,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kuber
netes.pod.uid: 80a357c6b54cb6407e6733b556b49073,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1661f29b-96ce-4b84-ab9d-8559edac0fd5 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:33:02 pause-547166 crio[2469]: time="2024-05-28 21:33:02.454146726Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b0d7df66-47e8-4cd4-bdf0-2a4772596c99 name=/runtime.v1.RuntimeService/Version
	May 28 21:33:02 pause-547166 crio[2469]: time="2024-05-28 21:33:02.454296336Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b0d7df66-47e8-4cd4-bdf0-2a4772596c99 name=/runtime.v1.RuntimeService/Version
	May 28 21:33:02 pause-547166 crio[2469]: time="2024-05-28 21:33:02.457228792Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a548c117-a0ae-45a0-8552-cdff8bfc0399 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:33:02 pause-547166 crio[2469]: time="2024-05-28 21:33:02.457929698Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716931982457891513,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a548c117-a0ae-45a0-8552-cdff8bfc0399 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:33:02 pause-547166 crio[2469]: time="2024-05-28 21:33:02.459296491Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6fe33f62-dc70-4e8b-9e86-604ee0eb2041 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:33:02 pause-547166 crio[2469]: time="2024-05-28 21:33:02.459398563Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6fe33f62-dc70-4e8b-9e86-604ee0eb2041 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:33:02 pause-547166 crio[2469]: time="2024-05-28 21:33:02.459827119Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:08b452587905adc995d6be896ca09b4cdc8f7e2c8006c1dde8f840780234c73f,PodSandboxId:55924196ca739084a0b3080fb8ccb335229f0224c24f9ddbadeb22b8111af101,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716931957799469178,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-94v5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8bf4bf8-52a8-4277-a373-bbeef065c3f5,},Annotations:map[string]string{io.kubernetes.container.hash: f685f60e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:226a0aa4c06bac0f992b982b30b88bf61cec228340067b5bdcf44890fa7c7909,PodSandboxId:e05247b016b1e7873ba28c88263fac15189e7852a7a983aa4e717d134f124944,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716931954043668132,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 347a9de7fc20962e4a0a09ef87e54be4,},Annotations:map[string]string{io.kubernetes.container.hash: edaaa3ad,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b38033e90e0d8625417b7232d5fc9711d210d72e3357eb81d5502488fd8e0d2c,PodSandboxId:f74361ca9d7e5fdcf09c3a46a9f6b465b50dc69accb28907c9c25d8a9844e1a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716931954010754734,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2509cd598b212d9d9a62337e8e8714,},Annotations:map[string]string{io.kubernetes.container.hash: 85c96444,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0df6e2b0ae6cae289b552e0b07bcf56cf39f7eea198d2ca44e9370b5edb0395b,PodSandboxId:8eb72e854358643cdcd081df9f3475788139fbbc2e7962e1455177f9d9b4aa89,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716931953999091903,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80a357c6b54cb6407e6733b556b49073,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9bc6b0cdba4c5acc81d7fd9a4fb877957755ca949984c6fe374005306c9e3da,PodSandboxId:aaeebdebe83d5c8a4c38add635a029fd700568be5bfb5ea47b5bb3167cedcd4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716931953987479961,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb895cce6b7bf17c8ad4004a2ee11778,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b435d362fd1970b9f1fca4268df3152c9a49524530b52b325893dcca65a19e,PodSandboxId:55924196ca739084a0b3080fb8ccb335229f0224c24f9ddbadeb22b8111af101,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_CREATED,CreatedAt:1716931952006438463,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-94v5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8bf4bf8-52a8-4277-a373-bbeef065c3f5,},Annotations:map[string]string{io.kubernetes.container.hash: f685f60e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminati
onGracePeriod: 30,},},&Container{Id:c9943d50da1bc58e17ce66130e50e9d6741f72e20b97ac6b5c28cf86486623dd,PodSandboxId:474d094735614e118cfbf97a8d06f1a3f530daa9379abe1e0a06ea7f2fd1c9f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716931949497318043,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7rb9n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e37fe79-cc67-4012-93b6-79ecc1f88ec7,},Annotations:map[string]string{io.kubernetes.container.hash: 802984b6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\
"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e2fdee3b8477fc1ea3db36c29f9c36519b2f5dfb905e78905d1b98b3beb3b98,PodSandboxId:474d094735614e118cfbf97a8d06f1a3f530daa9379abe1e0a06ea7f2fd1c9f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716931938586698177,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7rb9n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e37fe79-cc67-4012-93b6-79ecc1f88ec7,},Annotations:map[string]string{io.kubernetes.container.hash: 802984b6,io.kubernetes.container.port
s: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a31c9d04166019c2a49c4e013e4b94aa69a5bb31cef09d61eb8df29082fac19,PodSandboxId:aaeebdebe83d5c8a4c38add635a029fd700568be5bfb5ea47b5bb3167cedcd4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716931937823461156,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pa
use-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb895cce6b7bf17c8ad4004a2ee11778,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:389d655064d2f72cd1d31d88f738025b8d89ca4b2a1b99ae56387c719cc80594,PodSandboxId:e05247b016b1e7873ba28c88263fac15189e7852a7a983aa4e717d134f124944,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716931937808886708,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-547166,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: 347a9de7fc20962e4a0a09ef87e54be4,},Annotations:map[string]string{io.kubernetes.container.hash: edaaa3ad,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f249f829d4e7068aec569445d4895582618b4d27ac743a409d93faad930c9de,PodSandboxId:f74361ca9d7e5fdcf09c3a46a9f6b465b50dc69accb28907c9c25d8a9844e1a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716931937725314411,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 2b2509cd598b212d9d9a62337e8e8714,},Annotations:map[string]string{io.kubernetes.container.hash: 85c96444,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fd4e65044b3a9dcd1f16966186410e1f690d18a4a5d286b7249fa824cb0213d,PodSandboxId:8eb72e854358643cdcd081df9f3475788139fbbc2e7962e1455177f9d9b4aa89,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716931937585135604,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kuber
netes.pod.uid: 80a357c6b54cb6407e6733b556b49073,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6fe33f62-dc70-4e8b-9e86-604ee0eb2041 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:33:02 pause-547166 crio[2469]: time="2024-05-28 21:33:02.520605215Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2e1d2546-cfac-4459-a3ac-74536c4a058e name=/runtime.v1.RuntimeService/Version
	May 28 21:33:02 pause-547166 crio[2469]: time="2024-05-28 21:33:02.520782914Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2e1d2546-cfac-4459-a3ac-74536c4a058e name=/runtime.v1.RuntimeService/Version
	May 28 21:33:02 pause-547166 crio[2469]: time="2024-05-28 21:33:02.521992552Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=33707c5a-dd05-42ec-989f-ba39722fd1cc name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:33:02 pause-547166 crio[2469]: time="2024-05-28 21:33:02.522374352Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716931982522352660,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=33707c5a-dd05-42ec-989f-ba39722fd1cc name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:33:02 pause-547166 crio[2469]: time="2024-05-28 21:33:02.523000268Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0cf9253d-a360-4eb5-a71c-3cebc067fcd9 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:33:02 pause-547166 crio[2469]: time="2024-05-28 21:33:02.523084451Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0cf9253d-a360-4eb5-a71c-3cebc067fcd9 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:33:02 pause-547166 crio[2469]: time="2024-05-28 21:33:02.523337423Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:08b452587905adc995d6be896ca09b4cdc8f7e2c8006c1dde8f840780234c73f,PodSandboxId:55924196ca739084a0b3080fb8ccb335229f0224c24f9ddbadeb22b8111af101,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716931957799469178,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-94v5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8bf4bf8-52a8-4277-a373-bbeef065c3f5,},Annotations:map[string]string{io.kubernetes.container.hash: f685f60e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:226a0aa4c06bac0f992b982b30b88bf61cec228340067b5bdcf44890fa7c7909,PodSandboxId:e05247b016b1e7873ba28c88263fac15189e7852a7a983aa4e717d134f124944,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716931954043668132,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 347a9de7fc20962e4a0a09ef87e54be4,},Annotations:map[string]string{io.kubernetes.container.hash: edaaa3ad,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b38033e90e0d8625417b7232d5fc9711d210d72e3357eb81d5502488fd8e0d2c,PodSandboxId:f74361ca9d7e5fdcf09c3a46a9f6b465b50dc69accb28907c9c25d8a9844e1a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716931954010754734,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2509cd598b212d9d9a62337e8e8714,},Annotations:map[string]string{io.kubernetes.container.hash: 85c96444,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0df6e2b0ae6cae289b552e0b07bcf56cf39f7eea198d2ca44e9370b5edb0395b,PodSandboxId:8eb72e854358643cdcd081df9f3475788139fbbc2e7962e1455177f9d9b4aa89,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716931953999091903,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80a357c6b54cb6407e6733b556b49073,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9bc6b0cdba4c5acc81d7fd9a4fb877957755ca949984c6fe374005306c9e3da,PodSandboxId:aaeebdebe83d5c8a4c38add635a029fd700568be5bfb5ea47b5bb3167cedcd4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716931953987479961,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb895cce6b7bf17c8ad4004a2ee11778,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32b435d362fd1970b9f1fca4268df3152c9a49524530b52b325893dcca65a19e,PodSandboxId:55924196ca739084a0b3080fb8ccb335229f0224c24f9ddbadeb22b8111af101,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_CREATED,CreatedAt:1716931952006438463,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-94v5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8bf4bf8-52a8-4277-a373-bbeef065c3f5,},Annotations:map[string]string{io.kubernetes.container.hash: f685f60e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminati
onGracePeriod: 30,},},&Container{Id:c9943d50da1bc58e17ce66130e50e9d6741f72e20b97ac6b5c28cf86486623dd,PodSandboxId:474d094735614e118cfbf97a8d06f1a3f530daa9379abe1e0a06ea7f2fd1c9f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716931949497318043,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7rb9n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e37fe79-cc67-4012-93b6-79ecc1f88ec7,},Annotations:map[string]string{io.kubernetes.container.hash: 802984b6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\
"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e2fdee3b8477fc1ea3db36c29f9c36519b2f5dfb905e78905d1b98b3beb3b98,PodSandboxId:474d094735614e118cfbf97a8d06f1a3f530daa9379abe1e0a06ea7f2fd1c9f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1716931938586698177,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7rb9n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e37fe79-cc67-4012-93b6-79ecc1f88ec7,},Annotations:map[string]string{io.kubernetes.container.hash: 802984b6,io.kubernetes.container.port
s: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a31c9d04166019c2a49c4e013e4b94aa69a5bb31cef09d61eb8df29082fac19,PodSandboxId:aaeebdebe83d5c8a4c38add635a029fd700568be5bfb5ea47b5bb3167cedcd4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1716931937823461156,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pa
use-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb895cce6b7bf17c8ad4004a2ee11778,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:389d655064d2f72cd1d31d88f738025b8d89ca4b2a1b99ae56387c719cc80594,PodSandboxId:e05247b016b1e7873ba28c88263fac15189e7852a7a983aa4e717d134f124944,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1716931937808886708,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-547166,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: 347a9de7fc20962e4a0a09ef87e54be4,},Annotations:map[string]string{io.kubernetes.container.hash: edaaa3ad,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f249f829d4e7068aec569445d4895582618b4d27ac743a409d93faad930c9de,PodSandboxId:f74361ca9d7e5fdcf09c3a46a9f6b465b50dc69accb28907c9c25d8a9844e1a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1716931937725314411,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 2b2509cd598b212d9d9a62337e8e8714,},Annotations:map[string]string{io.kubernetes.container.hash: 85c96444,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fd4e65044b3a9dcd1f16966186410e1f690d18a4a5d286b7249fa824cb0213d,PodSandboxId:8eb72e854358643cdcd081df9f3475788139fbbc2e7962e1455177f9d9b4aa89,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1716931937585135604,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-547166,io.kubernetes.pod.namespace: kube-system,io.kuber
netes.pod.uid: 80a357c6b54cb6407e6733b556b49073,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0cf9253d-a360-4eb5-a71c-3cebc067fcd9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	08b452587905a       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   24 seconds ago      Running             kube-proxy                3                   55924196ca739       kube-proxy-94v5m
	226a0aa4c06ba       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   28 seconds ago      Running             etcd                      2                   e05247b016b1e       etcd-pause-547166
	b38033e90e0d8       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   28 seconds ago      Running             kube-apiserver            2                   f74361ca9d7e5       kube-apiserver-pause-547166
	0df6e2b0ae6ca       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   28 seconds ago      Running             kube-controller-manager   2                   8eb72e8543586       kube-controller-manager-pause-547166
	c9bc6b0cdba4c       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   28 seconds ago      Running             kube-scheduler            2                   aaeebdebe83d5       kube-scheduler-pause-547166
	32b435d362fd1       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   30 seconds ago      Created             kube-proxy                2                   55924196ca739       kube-proxy-94v5m
	c9943d50da1bc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   33 seconds ago      Running             coredns                   2                   474d094735614       coredns-7db6d8ff4d-7rb9n
	3e2fdee3b8477       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   44 seconds ago      Exited              coredns                   1                   474d094735614       coredns-7db6d8ff4d-7rb9n
	2a31c9d041660       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   44 seconds ago      Exited              kube-scheduler            1                   aaeebdebe83d5       kube-scheduler-pause-547166
	389d655064d2f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   44 seconds ago      Exited              etcd                      1                   e05247b016b1e       etcd-pause-547166
	1f249f829d4e7       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   44 seconds ago      Exited              kube-apiserver            1                   f74361ca9d7e5       kube-apiserver-pause-547166
	6fd4e65044b3a       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   45 seconds ago      Exited              kube-controller-manager   1                   8eb72e8543586       kube-controller-manager-pause-547166
	
	
	==> coredns [3e2fdee3b8477fc1ea3db36c29f9c36519b2f5dfb905e78905d1b98b3beb3b98] <==
	
	
	==> coredns [c9943d50da1bc58e17ce66130e50e9d6741f72e20b97ac6b5c28cf86486623dd] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:55880 - 60924 "HINFO IN 8492936006546547809.5912867834691675911. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015392803s
	
	
	==> describe nodes <==
	Name:               pause-547166
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-547166
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82
	                    minikube.k8s.io/name=pause-547166
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_28T21_31_04_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 21:31:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-547166
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 21:32:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 May 2024 21:32:37 +0000   Tue, 28 May 2024 21:30:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 May 2024 21:32:37 +0000   Tue, 28 May 2024 21:30:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 May 2024 21:32:37 +0000   Tue, 28 May 2024 21:30:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 May 2024 21:32:37 +0000   Tue, 28 May 2024 21:31:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.108
	  Hostname:    pause-547166
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a077d34a1a8412d86474f83483a7b3c
	  System UUID:                2a077d34-a1a8-412d-8647-4f83483a7b3c
	  Boot ID:                    1a8c1b6a-3f36-4042-b21b-7034fbfc2291
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-7rb9n                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     104s
	  kube-system                 etcd-pause-547166                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         118s
	  kube-system                 kube-apiserver-pause-547166             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-pause-547166    200m (10%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-94v5m                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-pause-547166             100m (5%)     0 (0%)      0 (0%)           0 (0%)         118s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 102s               kube-proxy       
	  Normal  Starting                 24s                kube-proxy       
	  Normal  Starting                 119s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     118s               kubelet          Node pause-547166 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  118s               kubelet          Node pause-547166 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s               kubelet          Node pause-547166 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                118s               kubelet          Node pause-547166 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  118s               kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           105s               node-controller  Node pause-547166 event: Registered Node pause-547166 in Controller
	  Normal  Starting                 29s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s (x8 over 29s)  kubelet          Node pause-547166 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s (x8 over 29s)  kubelet          Node pause-547166 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s (x7 over 29s)  kubelet          Node pause-547166 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13s                node-controller  Node pause-547166 event: Registered Node pause-547166 in Controller
	
	
	==> dmesg <==
	[  +0.054646] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063636] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.197812] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.153379] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.307143] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.443861] systemd-fstab-generator[762]: Ignoring "noauto" option for root device
	[  +0.057667] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.087769] systemd-fstab-generator[952]: Ignoring "noauto" option for root device
	[  +0.065828] kauditd_printk_skb: 18 callbacks suppressed
	[May28 21:31] systemd-fstab-generator[1291]: Ignoring "noauto" option for root device
	[  +0.080490] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.222682] kauditd_printk_skb: 18 callbacks suppressed
	[  +8.230516] systemd-fstab-generator[1516]: Ignoring "noauto" option for root device
	[ +11.638076] kauditd_printk_skb: 84 callbacks suppressed
	[May28 21:32] systemd-fstab-generator[2388]: Ignoring "noauto" option for root device
	[  +0.151668] systemd-fstab-generator[2400]: Ignoring "noauto" option for root device
	[  +0.182486] systemd-fstab-generator[2414]: Ignoring "noauto" option for root device
	[  +0.149048] systemd-fstab-generator[2426]: Ignoring "noauto" option for root device
	[  +0.342061] systemd-fstab-generator[2454]: Ignoring "noauto" option for root device
	[  +6.944282] systemd-fstab-generator[2581]: Ignoring "noauto" option for root device
	[  +0.070807] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.343947] kauditd_printk_skb: 87 callbacks suppressed
	[ +10.764591] systemd-fstab-generator[3423]: Ignoring "noauto" option for root device
	[  +4.608832] kauditd_printk_skb: 51 callbacks suppressed
	[ +16.192933] systemd-fstab-generator[3822]: Ignoring "noauto" option for root device
	
	
	==> etcd [226a0aa4c06bac0f992b982b30b88bf61cec228340067b5bdcf44890fa7c7909] <==
	{"level":"info","ts":"2024-05-28T21:32:34.561165Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"d5ce96a8bfe0f5c1","initial-advertise-peer-urls":["https://192.168.50.108:2380"],"listen-peer-urls":["https://192.168.50.108:2380"],"advertise-client-urls":["https://192.168.50.108:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.108:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-28T21:32:34.561251Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-28T21:32:34.560906Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.108:2380"}
	{"level":"info","ts":"2024-05-28T21:32:34.561344Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.108:2380"}
	{"level":"info","ts":"2024-05-28T21:32:35.691791Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d5ce96a8bfe0f5c1 is starting a new election at term 3"}
	{"level":"info","ts":"2024-05-28T21:32:35.691842Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d5ce96a8bfe0f5c1 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-05-28T21:32:35.691871Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d5ce96a8bfe0f5c1 received MsgPreVoteResp from d5ce96a8bfe0f5c1 at term 3"}
	{"level":"info","ts":"2024-05-28T21:32:35.691902Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d5ce96a8bfe0f5c1 became candidate at term 4"}
	{"level":"info","ts":"2024-05-28T21:32:35.69191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d5ce96a8bfe0f5c1 received MsgVoteResp from d5ce96a8bfe0f5c1 at term 4"}
	{"level":"info","ts":"2024-05-28T21:32:35.691919Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d5ce96a8bfe0f5c1 became leader at term 4"}
	{"level":"info","ts":"2024-05-28T21:32:35.691926Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d5ce96a8bfe0f5c1 elected leader d5ce96a8bfe0f5c1 at term 4"}
	{"level":"info","ts":"2024-05-28T21:32:35.697177Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-28T21:32:35.697128Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"d5ce96a8bfe0f5c1","local-member-attributes":"{Name:pause-547166 ClientURLs:[https://192.168.50.108:2379]}","request-path":"/0/members/d5ce96a8bfe0f5c1/attributes","cluster-id":"38e677d7bff02ecf","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-28T21:32:35.697909Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-28T21:32:35.698109Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-28T21:32:35.698121Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-28T21:32:35.699155Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.108:2379"}
	{"level":"info","ts":"2024-05-28T21:32:35.699914Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-05-28T21:32:58.764547Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"211.796949ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17708593269401140572 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.50.108\" mod_revision:467 > success:<request_put:<key:\"/registry/masterleases/192.168.50.108\" value_size:67 lease:8485221232546364762 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.108\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-05-28T21:32:58.764654Z","caller":"traceutil/trace.go:171","msg":"trace[274044149] transaction","detail":"{read_only:false; response_revision:481; number_of_response:1; }","duration":"338.047944ms","start":"2024-05-28T21:32:58.42659Z","end":"2024-05-28T21:32:58.764638Z","steps":["trace[274044149] 'process raft request'  (duration: 125.676342ms)","trace[274044149] 'compare'  (duration: 211.663718ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-28T21:32:58.764766Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-28T21:32:58.426577Z","time spent":"338.108573ms","remote":"127.0.0.1:56670","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":120,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.50.108\" mod_revision:467 > success:<request_put:<key:\"/registry/masterleases/192.168.50.108\" value_size:67 lease:8485221232546364762 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.108\" > >"}
	{"level":"warn","ts":"2024-05-28T21:32:59.280613Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"387.324136ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-28T21:32:59.280759Z","caller":"traceutil/trace.go:171","msg":"trace[901839855] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:481; }","duration":"387.43297ms","start":"2024-05-28T21:32:58.893261Z","end":"2024-05-28T21:32:59.280694Z","steps":["trace[901839855] 'range keys from in-memory index tree'  (duration: 387.287022ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T21:32:59.280828Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"227.726104ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-28T21:32:59.281086Z","caller":"traceutil/trace.go:171","msg":"trace[708541599] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:481; }","duration":"228.151657ms","start":"2024-05-28T21:32:59.052923Z","end":"2024-05-28T21:32:59.281075Z","steps":["trace[708541599] 'range keys from in-memory index tree'  (duration: 227.585228ms)"],"step_count":1}
	
	
	==> etcd [389d655064d2f72cd1d31d88f738025b8d89ca4b2a1b99ae56387c719cc80594] <==
	{"level":"info","ts":"2024-05-28T21:32:19.041775Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-28T21:32:20.360485Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d5ce96a8bfe0f5c1 is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-28T21:32:20.360591Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d5ce96a8bfe0f5c1 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-28T21:32:20.360657Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d5ce96a8bfe0f5c1 received MsgPreVoteResp from d5ce96a8bfe0f5c1 at term 2"}
	{"level":"info","ts":"2024-05-28T21:32:20.360693Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d5ce96a8bfe0f5c1 became candidate at term 3"}
	{"level":"info","ts":"2024-05-28T21:32:20.360858Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d5ce96a8bfe0f5c1 received MsgVoteResp from d5ce96a8bfe0f5c1 at term 3"}
	{"level":"info","ts":"2024-05-28T21:32:20.360903Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d5ce96a8bfe0f5c1 became leader at term 3"}
	{"level":"info","ts":"2024-05-28T21:32:20.360948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d5ce96a8bfe0f5c1 elected leader d5ce96a8bfe0f5c1 at term 3"}
	{"level":"info","ts":"2024-05-28T21:32:20.363891Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"d5ce96a8bfe0f5c1","local-member-attributes":"{Name:pause-547166 ClientURLs:[https://192.168.50.108:2379]}","request-path":"/0/members/d5ce96a8bfe0f5c1/attributes","cluster-id":"38e677d7bff02ecf","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-28T21:32:20.363948Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-28T21:32:20.364046Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-28T21:32:20.364591Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-28T21:32:20.364658Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-28T21:32:20.367065Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-28T21:32:20.367621Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.108:2379"}
	{"level":"info","ts":"2024-05-28T21:32:21.781677Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-05-28T21:32:21.781837Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"pause-547166","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.108:2380"],"advertise-client-urls":["https://192.168.50.108:2379"]}
	{"level":"warn","ts":"2024-05-28T21:32:21.781903Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-28T21:32:21.781982Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-28T21:32:21.808063Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.108:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-28T21:32:21.808119Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.108:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-28T21:32:21.808173Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d5ce96a8bfe0f5c1","current-leader-member-id":"d5ce96a8bfe0f5c1"}
	{"level":"info","ts":"2024-05-28T21:32:21.811454Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.50.108:2380"}
	{"level":"info","ts":"2024-05-28T21:32:21.81155Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.50.108:2380"}
	{"level":"info","ts":"2024-05-28T21:32:21.811575Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"pause-547166","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.108:2380"],"advertise-client-urls":["https://192.168.50.108:2379"]}
	
	
	==> kernel <==
	 21:33:02 up 2 min,  0 users,  load average: 0.87, 0.39, 0.15
	Linux pause-547166 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1f249f829d4e7068aec569445d4895582618b4d27ac743a409d93faad930c9de] <==
	W0528 21:32:31.228000       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:32:31.255819       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:32:31.264819       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:32:31.293524       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:32:31.336693       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:32:31.341514       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:32:31.361468       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:32:31.422256       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:32:31.426986       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:32:31.432001       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:32:31.443594       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:32:31.451545       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:32:31.470457       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:32:31.499697       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:32:31.535560       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:32:31.554368       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:32:31.601413       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:32:31.646511       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:32:31.653253       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:32:31.690851       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:32:31.690960       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:32:31.743805       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:32:31.794313       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:32:31.843787       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0528 21:32:31.855982       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [b38033e90e0d8625417b7232d5fc9711d210d72e3357eb81d5502488fd8e0d2c] <==
	I0528 21:32:37.010779       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0528 21:32:37.044056       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0528 21:32:37.047260       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0528 21:32:37.047346       1 policy_source.go:224] refreshing policies
	I0528 21:32:37.078302       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0528 21:32:37.078413       1 shared_informer.go:320] Caches are synced for configmaps
	I0528 21:32:37.078500       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0528 21:32:37.080246       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0528 21:32:37.080311       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0528 21:32:37.080536       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0528 21:32:37.083553       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0528 21:32:37.099945       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0528 21:32:37.118559       1 cache.go:39] Caches are synced for autoregister controller
	I0528 21:32:37.896233       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0528 21:32:38.732122       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0528 21:32:38.749374       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0528 21:32:38.786105       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0528 21:32:38.815249       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0528 21:32:38.821315       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0528 21:32:49.744084       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0528 21:32:49.746205       1 controller.go:615] quota admission added evaluator for: endpoints
	I0528 21:32:58.765424       1 trace.go:236] Trace[1418498380]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.50.108,type:*v1.Endpoints,resource:apiServerIPInfo (28-May-2024 21:32:58.185) (total time: 579ms):
	Trace[1418498380]: ---"Transaction prepared" 239ms (21:32:58.426)
	Trace[1418498380]: ---"Txn call completed" 339ms (21:32:58.765)
	Trace[1418498380]: [579.701849ms] [579.701849ms] END
	
	
	==> kube-controller-manager [0df6e2b0ae6cae289b552e0b07bcf56cf39f7eea198d2ca44e9370b5edb0395b] <==
	I0528 21:32:49.760503       1 shared_informer.go:320] Caches are synced for expand
	I0528 21:32:49.764049       1 shared_informer.go:320] Caches are synced for job
	I0528 21:32:49.769265       1 shared_informer.go:320] Caches are synced for HPA
	I0528 21:32:49.772051       1 shared_informer.go:320] Caches are synced for persistent volume
	I0528 21:32:49.773568       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0528 21:32:49.783383       1 shared_informer.go:320] Caches are synced for stateful set
	I0528 21:32:49.783440       1 shared_informer.go:320] Caches are synced for PVC protection
	I0528 21:32:49.783830       1 shared_informer.go:320] Caches are synced for cronjob
	I0528 21:32:49.783954       1 shared_informer.go:320] Caches are synced for attach detach
	I0528 21:32:49.784131       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0528 21:32:49.786477       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="67.296µs"
	I0528 21:32:49.802642       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0528 21:32:49.812466       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0528 21:32:49.832754       1 shared_informer.go:320] Caches are synced for daemon sets
	I0528 21:32:49.844190       1 shared_informer.go:320] Caches are synced for taint
	I0528 21:32:49.844366       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0528 21:32:49.844456       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-547166"
	I0528 21:32:49.844549       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0528 21:32:49.926835       1 shared_informer.go:320] Caches are synced for resource quota
	I0528 21:32:49.951816       1 shared_informer.go:320] Caches are synced for service account
	I0528 21:32:49.960100       1 shared_informer.go:320] Caches are synced for namespace
	I0528 21:32:49.984271       1 shared_informer.go:320] Caches are synced for resource quota
	I0528 21:32:50.408982       1 shared_informer.go:320] Caches are synced for garbage collector
	I0528 21:32:50.409034       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0528 21:32:50.418801       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [6fd4e65044b3a9dcd1f16966186410e1f690d18a4a5d286b7249fa824cb0213d] <==
	I0528 21:32:19.250924       1 serving.go:380] Generated self-signed cert in-memory
	I0528 21:32:19.682548       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0528 21:32:19.682652       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 21:32:19.684282       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0528 21:32:19.684394       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0528 21:32:19.684404       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0528 21:32:19.684413       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	
	==> kube-proxy [08b452587905adc995d6be896ca09b4cdc8f7e2c8006c1dde8f840780234c73f] <==
	I0528 21:32:37.904246       1 server_linux.go:69] "Using iptables proxy"
	I0528 21:32:37.912085       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.108"]
	I0528 21:32:37.961953       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0528 21:32:37.962067       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0528 21:32:37.962082       1 server_linux.go:165] "Using iptables Proxier"
	I0528 21:32:37.968098       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0528 21:32:37.968350       1 server.go:872] "Version info" version="v1.30.1"
	I0528 21:32:37.968418       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 21:32:37.969578       1 config.go:192] "Starting service config controller"
	I0528 21:32:37.969625       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0528 21:32:37.969662       1 config.go:101] "Starting endpoint slice config controller"
	I0528 21:32:37.969678       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0528 21:32:37.970191       1 config.go:319] "Starting node config controller"
	I0528 21:32:37.970276       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0528 21:32:38.069812       1 shared_informer.go:320] Caches are synced for service config
	I0528 21:32:38.069916       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0528 21:32:38.071259       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [32b435d362fd1970b9f1fca4268df3152c9a49524530b52b325893dcca65a19e] <==
	
	
	==> kube-scheduler [2a31c9d04166019c2a49c4e013e4b94aa69a5bb31cef09d61eb8df29082fac19] <==
	I0528 21:32:19.545423       1 serving.go:380] Generated self-signed cert in-memory
	W0528 21:32:21.661358       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0528 21:32:21.661434       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0528 21:32:21.661462       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0528 21:32:21.661486       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0528 21:32:21.711231       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0528 21:32:21.711281       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 21:32:21.716025       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	E0528 21:32:21.716240       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	E0528 21:32:21.716466       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c9bc6b0cdba4c5acc81d7fd9a4fb877957755ca949984c6fe374005306c9e3da] <==
	I0528 21:32:35.350889       1 serving.go:380] Generated self-signed cert in-memory
	W0528 21:32:36.989566       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0528 21:32:36.991798       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0528 21:32:36.991865       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0528 21:32:36.991900       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0528 21:32:37.033296       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0528 21:32:37.033391       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 21:32:37.044562       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0528 21:32:37.047605       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0528 21:32:37.047672       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0528 21:32:37.047867       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0528 21:32:37.148517       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 28 21:32:33 pause-547166 kubelet[3430]: I0528 21:32:33.692663    3430 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2b2509cd598b212d9d9a62337e8e8714-usr-share-ca-certificates\") pod \"kube-apiserver-pause-547166\" (UID: \"2b2509cd598b212d9d9a62337e8e8714\") " pod="kube-system/kube-apiserver-pause-547166"
	May 28 21:32:33 pause-547166 kubelet[3430]: I0528 21:32:33.692677    3430 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/80a357c6b54cb6407e6733b556b49073-k8s-certs\") pod \"kube-controller-manager-pause-547166\" (UID: \"80a357c6b54cb6407e6733b556b49073\") " pod="kube-system/kube-controller-manager-pause-547166"
	May 28 21:32:33 pause-547166 kubelet[3430]: I0528 21:32:33.692770    3430 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/80a357c6b54cb6407e6733b556b49073-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-547166\" (UID: \"80a357c6b54cb6407e6733b556b49073\") " pod="kube-system/kube-controller-manager-pause-547166"
	May 28 21:32:33 pause-547166 kubelet[3430]: E0528 21:32:33.694034    3430 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-547166?timeout=10s\": dial tcp 192.168.50.108:8443: connect: connection refused" interval="400ms"
	May 28 21:32:33 pause-547166 kubelet[3430]: I0528 21:32:33.791149    3430 kubelet_node_status.go:73] "Attempting to register node" node="pause-547166"
	May 28 21:32:33 pause-547166 kubelet[3430]: E0528 21:32:33.792251    3430 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.108:8443: connect: connection refused" node="pause-547166"
	May 28 21:32:33 pause-547166 kubelet[3430]: I0528 21:32:33.962275    3430 scope.go:117] "RemoveContainer" containerID="6fd4e65044b3a9dcd1f16966186410e1f690d18a4a5d286b7249fa824cb0213d"
	May 28 21:32:33 pause-547166 kubelet[3430]: I0528 21:32:33.963822    3430 scope.go:117] "RemoveContainer" containerID="2a31c9d04166019c2a49c4e013e4b94aa69a5bb31cef09d61eb8df29082fac19"
	May 28 21:32:33 pause-547166 kubelet[3430]: I0528 21:32:33.964933    3430 scope.go:117] "RemoveContainer" containerID="389d655064d2f72cd1d31d88f738025b8d89ca4b2a1b99ae56387c719cc80594"
	May 28 21:32:33 pause-547166 kubelet[3430]: I0528 21:32:33.966511    3430 scope.go:117] "RemoveContainer" containerID="1f249f829d4e7068aec569445d4895582618b4d27ac743a409d93faad930c9de"
	May 28 21:32:34 pause-547166 kubelet[3430]: E0528 21:32:34.095776    3430 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-547166?timeout=10s\": dial tcp 192.168.50.108:8443: connect: connection refused" interval="800ms"
	May 28 21:32:34 pause-547166 kubelet[3430]: I0528 21:32:34.196507    3430 kubelet_node_status.go:73] "Attempting to register node" node="pause-547166"
	May 28 21:32:34 pause-547166 kubelet[3430]: E0528 21:32:34.197620    3430 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.108:8443: connect: connection refused" node="pause-547166"
	May 28 21:32:34 pause-547166 kubelet[3430]: I0528 21:32:34.999572    3430 kubelet_node_status.go:73] "Attempting to register node" node="pause-547166"
	May 28 21:32:37 pause-547166 kubelet[3430]: I0528 21:32:37.129004    3430 kubelet_node_status.go:112] "Node was previously registered" node="pause-547166"
	May 28 21:32:37 pause-547166 kubelet[3430]: I0528 21:32:37.129497    3430 kubelet_node_status.go:76] "Successfully registered node" node="pause-547166"
	May 28 21:32:37 pause-547166 kubelet[3430]: I0528 21:32:37.131952    3430 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 28 21:32:37 pause-547166 kubelet[3430]: I0528 21:32:37.133238    3430 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 28 21:32:37 pause-547166 kubelet[3430]: I0528 21:32:37.475690    3430 apiserver.go:52] "Watching apiserver"
	May 28 21:32:37 pause-547166 kubelet[3430]: I0528 21:32:37.478637    3430 topology_manager.go:215] "Topology Admit Handler" podUID="4e37fe79-cc67-4012-93b6-79ecc1f88ec7" podNamespace="kube-system" podName="coredns-7db6d8ff4d-7rb9n"
	May 28 21:32:37 pause-547166 kubelet[3430]: I0528 21:32:37.478832    3430 topology_manager.go:215] "Topology Admit Handler" podUID="b8bf4bf8-52a8-4277-a373-bbeef065c3f5" podNamespace="kube-system" podName="kube-proxy-94v5m"
	May 28 21:32:37 pause-547166 kubelet[3430]: I0528 21:32:37.489059    3430 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	May 28 21:32:37 pause-547166 kubelet[3430]: I0528 21:32:37.492170    3430 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b8bf4bf8-52a8-4277-a373-bbeef065c3f5-xtables-lock\") pod \"kube-proxy-94v5m\" (UID: \"b8bf4bf8-52a8-4277-a373-bbeef065c3f5\") " pod="kube-system/kube-proxy-94v5m"
	May 28 21:32:37 pause-547166 kubelet[3430]: I0528 21:32:37.492320    3430 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b8bf4bf8-52a8-4277-a373-bbeef065c3f5-lib-modules\") pod \"kube-proxy-94v5m\" (UID: \"b8bf4bf8-52a8-4277-a373-bbeef065c3f5\") " pod="kube-system/kube-proxy-94v5m"
	May 28 21:32:37 pause-547166 kubelet[3430]: I0528 21:32:37.779876    3430 scope.go:117] "RemoveContainer" containerID="32b435d362fd1970b9f1fca4268df3152c9a49524530b52b325893dcca65a19e"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-547166 -n pause-547166
helpers_test.go:261: (dbg) Run:  kubectl --context pause-547166 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (63.32s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (271.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-499466 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-499466 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m30.930393445s)

                                                
                                                
-- stdout --
	* [old-k8s-version-499466] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18966-3963/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-3963/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-499466" primary control-plane node in "old-k8s-version-499466" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0528 21:38:54.607657   64874 out.go:291] Setting OutFile to fd 1 ...
	I0528 21:38:54.608080   64874 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:38:54.608094   64874 out.go:304] Setting ErrFile to fd 2...
	I0528 21:38:54.608099   64874 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:38:54.608349   64874 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
	I0528 21:38:54.608928   64874 out.go:298] Setting JSON to false
	I0528 21:38:54.610113   64874 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4878,"bootTime":1716927457,"procs":308,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0528 21:38:54.610168   64874 start.go:139] virtualization: kvm guest
	I0528 21:38:54.612429   64874 out.go:177] * [old-k8s-version-499466] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0528 21:38:54.613610   64874 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 21:38:54.613683   64874 notify.go:220] Checking for updates...
	I0528 21:38:54.614849   64874 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 21:38:54.616250   64874 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 21:38:54.617495   64874 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 21:38:54.618760   64874 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0528 21:38:54.620057   64874 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 21:38:54.621692   64874 config.go:182] Loaded profile config "bridge-110727": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 21:38:54.621809   64874 config.go:182] Loaded profile config "cert-expiration-257793": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 21:38:54.621912   64874 config.go:182] Loaded profile config "enable-default-cni-110727": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 21:38:54.622060   64874 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 21:38:54.660357   64874 out.go:177] * Using the kvm2 driver based on user configuration
	I0528 21:38:54.661626   64874 start.go:297] selected driver: kvm2
	I0528 21:38:54.661639   64874 start.go:901] validating driver "kvm2" against <nil>
	I0528 21:38:54.661650   64874 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0528 21:38:54.662496   64874 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 21:38:54.662588   64874 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18966-3963/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0528 21:38:54.678607   64874 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0528 21:38:54.678658   64874 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0528 21:38:54.678858   64874 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 21:38:54.678909   64874 cni.go:84] Creating CNI manager for ""
	I0528 21:38:54.678921   64874 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 21:38:54.678929   64874 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0528 21:38:54.678977   64874 start.go:340] cluster config:
	{Name:old-k8s-version-499466 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-499466 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:38:54.679068   64874 iso.go:125] acquiring lock: {Name:mk0ef48bd19f4019c3ee87391d15b9afc1d5e8d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 21:38:54.680725   64874 out.go:177] * Starting "old-k8s-version-499466" primary control-plane node in "old-k8s-version-499466" cluster
	I0528 21:38:54.681697   64874 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0528 21:38:54.681731   64874 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0528 21:38:54.681754   64874 cache.go:56] Caching tarball of preloaded images
	I0528 21:38:54.681851   64874 preload.go:173] Found /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0528 21:38:54.681862   64874 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0528 21:38:54.681963   64874 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/old-k8s-version-499466/config.json ...
	I0528 21:38:54.681987   64874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/old-k8s-version-499466/config.json: {Name:mkbf9b6d953414fe7baac9d6851ad2bc3a1f804c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:38:54.682143   64874 start.go:360] acquireMachinesLock for old-k8s-version-499466: {Name:mk6fb3103b370c7f3d9923fd9de7afe2c77239d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0528 21:38:54.682177   64874 start.go:364] duration metric: took 16.848µs to acquireMachinesLock for "old-k8s-version-499466"
	I0528 21:38:54.682192   64874 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-499466 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-499466 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0528 21:38:54.682279   64874 start.go:125] createHost starting for "" (driver="kvm2")
	I0528 21:38:54.683690   64874 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0528 21:38:54.683834   64874 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:38:54.683878   64874 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:38:54.698906   64874 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32977
	I0528 21:38:54.699310   64874 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:38:54.699819   64874 main.go:141] libmachine: Using API Version  1
	I0528 21:38:54.699847   64874 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:38:54.700186   64874 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:38:54.700398   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetMachineName
	I0528 21:38:54.700584   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .DriverName
	I0528 21:38:54.700744   64874 start.go:159] libmachine.API.Create for "old-k8s-version-499466" (driver="kvm2")
	I0528 21:38:54.700780   64874 client.go:168] LocalClient.Create starting
	I0528 21:38:54.700817   64874 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem
	I0528 21:38:54.700851   64874 main.go:141] libmachine: Decoding PEM data...
	I0528 21:38:54.700868   64874 main.go:141] libmachine: Parsing certificate...
	I0528 21:38:54.700934   64874 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem
	I0528 21:38:54.700962   64874 main.go:141] libmachine: Decoding PEM data...
	I0528 21:38:54.700984   64874 main.go:141] libmachine: Parsing certificate...
	I0528 21:38:54.701011   64874 main.go:141] libmachine: Running pre-create checks...
	I0528 21:38:54.701022   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .PreCreateCheck
	I0528 21:38:54.701358   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetConfigRaw
	I0528 21:38:54.701720   64874 main.go:141] libmachine: Creating machine...
	I0528 21:38:54.701739   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .Create
	I0528 21:38:54.701881   64874 main.go:141] libmachine: (old-k8s-version-499466) Creating KVM machine...
	I0528 21:38:54.703131   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | found existing default KVM network
	I0528 21:38:54.704623   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | I0528 21:38:54.704478   64898 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012df90}
	I0528 21:38:54.704651   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | created network xml: 
	I0528 21:38:54.704664   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | <network>
	I0528 21:38:54.704670   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG |   <name>mk-old-k8s-version-499466</name>
	I0528 21:38:54.704676   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG |   <dns enable='no'/>
	I0528 21:38:54.704683   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG |   
	I0528 21:38:54.704689   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0528 21:38:54.704697   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG |     <dhcp>
	I0528 21:38:54.704703   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0528 21:38:54.704713   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG |     </dhcp>
	I0528 21:38:54.704723   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG |   </ip>
	I0528 21:38:54.704737   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG |   
	I0528 21:38:54.704750   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | </network>
	I0528 21:38:54.704759   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | 
	I0528 21:38:54.709953   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | trying to create private KVM network mk-old-k8s-version-499466 192.168.39.0/24...
	I0528 21:38:54.781989   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | private KVM network mk-old-k8s-version-499466 192.168.39.0/24 created
	I0528 21:38:54.782021   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | I0528 21:38:54.781949   64898 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 21:38:54.782036   64874 main.go:141] libmachine: (old-k8s-version-499466) Setting up store path in /home/jenkins/minikube-integration/18966-3963/.minikube/machines/old-k8s-version-499466 ...
	I0528 21:38:54.782068   64874 main.go:141] libmachine: (old-k8s-version-499466) Building disk image from file:///home/jenkins/minikube-integration/18966-3963/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso
	I0528 21:38:54.782149   64874 main.go:141] libmachine: (old-k8s-version-499466) Downloading /home/jenkins/minikube-integration/18966-3963/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18966-3963/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0528 21:38:55.028734   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | I0528 21:38:55.028629   64898 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/old-k8s-version-499466/id_rsa...
	I0528 21:38:55.308629   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | I0528 21:38:55.308484   64898 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/old-k8s-version-499466/old-k8s-version-499466.rawdisk...
	I0528 21:38:55.308669   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | Writing magic tar header
	I0528 21:38:55.308690   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | Writing SSH key tar header
	I0528 21:38:55.308703   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | I0528 21:38:55.308592   64898 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18966-3963/.minikube/machines/old-k8s-version-499466 ...
	I0528 21:38:55.308721   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/old-k8s-version-499466
	I0528 21:38:55.308738   64874 main.go:141] libmachine: (old-k8s-version-499466) Setting executable bit set on /home/jenkins/minikube-integration/18966-3963/.minikube/machines/old-k8s-version-499466 (perms=drwx------)
	I0528 21:38:55.308749   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18966-3963/.minikube/machines
	I0528 21:38:55.308762   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 21:38:55.308771   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18966-3963
	I0528 21:38:55.308781   64874 main.go:141] libmachine: (old-k8s-version-499466) Setting executable bit set on /home/jenkins/minikube-integration/18966-3963/.minikube/machines (perms=drwxr-xr-x)
	I0528 21:38:55.308789   64874 main.go:141] libmachine: (old-k8s-version-499466) Setting executable bit set on /home/jenkins/minikube-integration/18966-3963/.minikube (perms=drwxr-xr-x)
	I0528 21:38:55.308798   64874 main.go:141] libmachine: (old-k8s-version-499466) Setting executable bit set on /home/jenkins/minikube-integration/18966-3963 (perms=drwxrwxr-x)
	I0528 21:38:55.308806   64874 main.go:141] libmachine: (old-k8s-version-499466) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0528 21:38:55.308813   64874 main.go:141] libmachine: (old-k8s-version-499466) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0528 21:38:55.308820   64874 main.go:141] libmachine: (old-k8s-version-499466) Creating domain...
	I0528 21:38:55.308845   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0528 21:38:55.308855   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | Checking permissions on dir: /home/jenkins
	I0528 21:38:55.308860   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | Checking permissions on dir: /home
	I0528 21:38:55.308866   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | Skipping /home - not owner
	I0528 21:38:55.310129   64874 main.go:141] libmachine: (old-k8s-version-499466) define libvirt domain using xml: 
	I0528 21:38:55.310153   64874 main.go:141] libmachine: (old-k8s-version-499466) <domain type='kvm'>
	I0528 21:38:55.310165   64874 main.go:141] libmachine: (old-k8s-version-499466)   <name>old-k8s-version-499466</name>
	I0528 21:38:55.310172   64874 main.go:141] libmachine: (old-k8s-version-499466)   <memory unit='MiB'>2200</memory>
	I0528 21:38:55.310181   64874 main.go:141] libmachine: (old-k8s-version-499466)   <vcpu>2</vcpu>
	I0528 21:38:55.310187   64874 main.go:141] libmachine: (old-k8s-version-499466)   <features>
	I0528 21:38:55.310196   64874 main.go:141] libmachine: (old-k8s-version-499466)     <acpi/>
	I0528 21:38:55.310204   64874 main.go:141] libmachine: (old-k8s-version-499466)     <apic/>
	I0528 21:38:55.310214   64874 main.go:141] libmachine: (old-k8s-version-499466)     <pae/>
	I0528 21:38:55.310235   64874 main.go:141] libmachine: (old-k8s-version-499466)     
	I0528 21:38:55.310249   64874 main.go:141] libmachine: (old-k8s-version-499466)   </features>
	I0528 21:38:55.310261   64874 main.go:141] libmachine: (old-k8s-version-499466)   <cpu mode='host-passthrough'>
	I0528 21:38:55.310270   64874 main.go:141] libmachine: (old-k8s-version-499466)   
	I0528 21:38:55.310280   64874 main.go:141] libmachine: (old-k8s-version-499466)   </cpu>
	I0528 21:38:55.310310   64874 main.go:141] libmachine: (old-k8s-version-499466)   <os>
	I0528 21:38:55.310335   64874 main.go:141] libmachine: (old-k8s-version-499466)     <type>hvm</type>
	I0528 21:38:55.310359   64874 main.go:141] libmachine: (old-k8s-version-499466)     <boot dev='cdrom'/>
	I0528 21:38:55.310370   64874 main.go:141] libmachine: (old-k8s-version-499466)     <boot dev='hd'/>
	I0528 21:38:55.310406   64874 main.go:141] libmachine: (old-k8s-version-499466)     <bootmenu enable='no'/>
	I0528 21:38:55.310426   64874 main.go:141] libmachine: (old-k8s-version-499466)   </os>
	I0528 21:38:55.310440   64874 main.go:141] libmachine: (old-k8s-version-499466)   <devices>
	I0528 21:38:55.310452   64874 main.go:141] libmachine: (old-k8s-version-499466)     <disk type='file' device='cdrom'>
	I0528 21:38:55.310481   64874 main.go:141] libmachine: (old-k8s-version-499466)       <source file='/home/jenkins/minikube-integration/18966-3963/.minikube/machines/old-k8s-version-499466/boot2docker.iso'/>
	I0528 21:38:55.310510   64874 main.go:141] libmachine: (old-k8s-version-499466)       <target dev='hdc' bus='scsi'/>
	I0528 21:38:55.310519   64874 main.go:141] libmachine: (old-k8s-version-499466)       <readonly/>
	I0528 21:38:55.310524   64874 main.go:141] libmachine: (old-k8s-version-499466)     </disk>
	I0528 21:38:55.310530   64874 main.go:141] libmachine: (old-k8s-version-499466)     <disk type='file' device='disk'>
	I0528 21:38:55.310539   64874 main.go:141] libmachine: (old-k8s-version-499466)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0528 21:38:55.310550   64874 main.go:141] libmachine: (old-k8s-version-499466)       <source file='/home/jenkins/minikube-integration/18966-3963/.minikube/machines/old-k8s-version-499466/old-k8s-version-499466.rawdisk'/>
	I0528 21:38:55.310556   64874 main.go:141] libmachine: (old-k8s-version-499466)       <target dev='hda' bus='virtio'/>
	I0528 21:38:55.310562   64874 main.go:141] libmachine: (old-k8s-version-499466)     </disk>
	I0528 21:38:55.310570   64874 main.go:141] libmachine: (old-k8s-version-499466)     <interface type='network'>
	I0528 21:38:55.310579   64874 main.go:141] libmachine: (old-k8s-version-499466)       <source network='mk-old-k8s-version-499466'/>
	I0528 21:38:55.310584   64874 main.go:141] libmachine: (old-k8s-version-499466)       <model type='virtio'/>
	I0528 21:38:55.310592   64874 main.go:141] libmachine: (old-k8s-version-499466)     </interface>
	I0528 21:38:55.310596   64874 main.go:141] libmachine: (old-k8s-version-499466)     <interface type='network'>
	I0528 21:38:55.310603   64874 main.go:141] libmachine: (old-k8s-version-499466)       <source network='default'/>
	I0528 21:38:55.310608   64874 main.go:141] libmachine: (old-k8s-version-499466)       <model type='virtio'/>
	I0528 21:38:55.310615   64874 main.go:141] libmachine: (old-k8s-version-499466)     </interface>
	I0528 21:38:55.310625   64874 main.go:141] libmachine: (old-k8s-version-499466)     <serial type='pty'>
	I0528 21:38:55.310633   64874 main.go:141] libmachine: (old-k8s-version-499466)       <target port='0'/>
	I0528 21:38:55.310638   64874 main.go:141] libmachine: (old-k8s-version-499466)     </serial>
	I0528 21:38:55.310644   64874 main.go:141] libmachine: (old-k8s-version-499466)     <console type='pty'>
	I0528 21:38:55.310649   64874 main.go:141] libmachine: (old-k8s-version-499466)       <target type='serial' port='0'/>
	I0528 21:38:55.310667   64874 main.go:141] libmachine: (old-k8s-version-499466)     </console>
	I0528 21:38:55.310682   64874 main.go:141] libmachine: (old-k8s-version-499466)     <rng model='virtio'>
	I0528 21:38:55.310697   64874 main.go:141] libmachine: (old-k8s-version-499466)       <backend model='random'>/dev/random</backend>
	I0528 21:38:55.310709   64874 main.go:141] libmachine: (old-k8s-version-499466)     </rng>
	I0528 21:38:55.310720   64874 main.go:141] libmachine: (old-k8s-version-499466)     
	I0528 21:38:55.310729   64874 main.go:141] libmachine: (old-k8s-version-499466)     
	I0528 21:38:55.310738   64874 main.go:141] libmachine: (old-k8s-version-499466)   </devices>
	I0528 21:38:55.310748   64874 main.go:141] libmachine: (old-k8s-version-499466) </domain>
	I0528 21:38:55.310761   64874 main.go:141] libmachine: (old-k8s-version-499466) 
	I0528 21:38:55.314706   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:a1:39:55 in network default
	I0528 21:38:55.315234   64874 main.go:141] libmachine: (old-k8s-version-499466) Ensuring networks are active...
	I0528 21:38:55.315248   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:38:55.315917   64874 main.go:141] libmachine: (old-k8s-version-499466) Ensuring network default is active
	I0528 21:38:55.316283   64874 main.go:141] libmachine: (old-k8s-version-499466) Ensuring network mk-old-k8s-version-499466 is active
	I0528 21:38:55.316787   64874 main.go:141] libmachine: (old-k8s-version-499466) Getting domain xml...
	I0528 21:38:55.317513   64874 main.go:141] libmachine: (old-k8s-version-499466) Creating domain...
	I0528 21:38:56.565811   64874 main.go:141] libmachine: (old-k8s-version-499466) Waiting to get IP...
	I0528 21:38:56.566671   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:38:56.567133   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | unable to find current IP address of domain old-k8s-version-499466 in network mk-old-k8s-version-499466
	I0528 21:38:56.567149   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | I0528 21:38:56.567118   64898 retry.go:31] will retry after 200.872196ms: waiting for machine to come up
	I0528 21:38:56.769653   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:38:56.770217   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | unable to find current IP address of domain old-k8s-version-499466 in network mk-old-k8s-version-499466
	I0528 21:38:56.770248   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | I0528 21:38:56.770184   64898 retry.go:31] will retry after 344.626094ms: waiting for machine to come up
	I0528 21:38:57.116716   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:38:57.117308   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | unable to find current IP address of domain old-k8s-version-499466 in network mk-old-k8s-version-499466
	I0528 21:38:57.117356   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | I0528 21:38:57.117237   64898 retry.go:31] will retry after 440.780244ms: waiting for machine to come up
	I0528 21:38:57.559860   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:38:57.560412   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | unable to find current IP address of domain old-k8s-version-499466 in network mk-old-k8s-version-499466
	I0528 21:38:57.560434   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | I0528 21:38:57.560369   64898 retry.go:31] will retry after 401.740461ms: waiting for machine to come up
	I0528 21:38:57.964113   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:38:57.964629   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | unable to find current IP address of domain old-k8s-version-499466 in network mk-old-k8s-version-499466
	I0528 21:38:57.964655   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | I0528 21:38:57.964587   64898 retry.go:31] will retry after 467.681656ms: waiting for machine to come up
	I0528 21:38:58.434274   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:38:58.434884   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | unable to find current IP address of domain old-k8s-version-499466 in network mk-old-k8s-version-499466
	I0528 21:38:58.434909   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | I0528 21:38:58.434837   64898 retry.go:31] will retry after 679.609629ms: waiting for machine to come up
	I0528 21:38:59.116286   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:38:59.116824   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | unable to find current IP address of domain old-k8s-version-499466 in network mk-old-k8s-version-499466
	I0528 21:38:59.116861   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | I0528 21:38:59.116797   64898 retry.go:31] will retry after 996.850419ms: waiting for machine to come up
	I0528 21:39:00.114791   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:39:00.115341   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | unable to find current IP address of domain old-k8s-version-499466 in network mk-old-k8s-version-499466
	I0528 21:39:00.115366   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | I0528 21:39:00.115286   64898 retry.go:31] will retry after 1.476299895s: waiting for machine to come up
	I0528 21:39:01.593802   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:39:01.594281   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | unable to find current IP address of domain old-k8s-version-499466 in network mk-old-k8s-version-499466
	I0528 21:39:01.594305   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | I0528 21:39:01.594231   64898 retry.go:31] will retry after 1.585827174s: waiting for machine to come up
	I0528 21:39:03.182064   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:39:03.182664   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | unable to find current IP address of domain old-k8s-version-499466 in network mk-old-k8s-version-499466
	I0528 21:39:03.182696   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | I0528 21:39:03.182610   64898 retry.go:31] will retry after 1.652626175s: waiting for machine to come up
	I0528 21:39:04.837168   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:39:04.837640   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | unable to find current IP address of domain old-k8s-version-499466 in network mk-old-k8s-version-499466
	I0528 21:39:04.837664   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | I0528 21:39:04.837583   64898 retry.go:31] will retry after 1.77136399s: waiting for machine to come up
	I0528 21:39:06.610999   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:39:06.611517   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | unable to find current IP address of domain old-k8s-version-499466 in network mk-old-k8s-version-499466
	I0528 21:39:06.611574   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | I0528 21:39:06.611502   64898 retry.go:31] will retry after 2.795396118s: waiting for machine to come up
	I0528 21:39:09.408350   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:39:09.408905   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | unable to find current IP address of domain old-k8s-version-499466 in network mk-old-k8s-version-499466
	I0528 21:39:09.408933   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | I0528 21:39:09.408860   64898 retry.go:31] will retry after 3.638624784s: waiting for machine to come up
	I0528 21:39:13.050868   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:39:13.051550   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | unable to find current IP address of domain old-k8s-version-499466 in network mk-old-k8s-version-499466
	I0528 21:39:13.051574   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | I0528 21:39:13.051513   64898 retry.go:31] will retry after 5.055044606s: waiting for machine to come up
	I0528 21:39:18.107669   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:39:18.108141   64874 main.go:141] libmachine: (old-k8s-version-499466) Found IP for machine: 192.168.39.8
	I0528 21:39:18.108166   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has current primary IP address 192.168.39.8 and MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:39:18.108172   64874 main.go:141] libmachine: (old-k8s-version-499466) Reserving static IP address...
	I0528 21:39:18.108527   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-499466", mac: "52:54:00:04:bf:9b", ip: "192.168.39.8"} in network mk-old-k8s-version-499466
	I0528 21:39:18.183411   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | Getting to WaitForSSH function...
	I0528 21:39:18.183443   64874 main.go:141] libmachine: (old-k8s-version-499466) Reserved static IP address: 192.168.39.8
	I0528 21:39:18.183458   64874 main.go:141] libmachine: (old-k8s-version-499466) Waiting for SSH to be available...
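Note: the run of "unable to find current IP address ... will retry after ..." lines above is libmachine polling the libvirt DHCP leases with a jittered, growing delay until the guest's MAC resolves to an address. A minimal Go sketch of that wait loop, assuming a hypothetical lookupLease helper in place of the real libvirt query (this is illustrative, not minikube's actual code):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupLease stands in for the real libvirt DHCP-lease query; it is an
    // assumption for this sketch, not minikube's actual function.
    func lookupLease(mac string) (string, error) {
    	return "", errors.New("no lease yet")
    }

    // waitForIP retries with a jittered, growing delay, mirroring the
    // "will retry after ..." lines in the log above.
    func waitForIP(mac string, deadline time.Duration) (string, error) {
    	base := 300 * time.Millisecond
    	for start := time.Now(); time.Since(start) < deadline; {
    		if ip, err := lookupLease(mac); err == nil {
    			return ip, nil
    		}
    		wait := base + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
    		time.Sleep(wait)
    		base = base * 3 / 2 // grow the base delay between attempts
    	}
    	return "", fmt.Errorf("no IP for %s within %v", mac, deadline)
    }

    func main() {
    	if ip, err := waitForIP("52:54:00:04:bf:9b", 2*time.Second); err != nil {
    		fmt.Println(err)
    	} else {
    		fmt.Println("found IP:", ip)
    	}
    }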
	I0528 21:39:18.186165   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:39:18.186559   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:bf:9b", ip: ""} in network mk-old-k8s-version-499466: {Iface:virbr1 ExpiryTime:2024-05-28 22:39:09 +0000 UTC Type:0 Mac:52:54:00:04:bf:9b Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:minikube Clientid:01:52:54:00:04:bf:9b}
	I0528 21:39:18.186603   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined IP address 192.168.39.8 and MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:39:18.186747   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | Using SSH client type: external
	I0528 21:39:18.186775   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | Using SSH private key: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/old-k8s-version-499466/id_rsa (-rw-------)
	I0528 21:39:18.186820   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.8 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18966-3963/.minikube/machines/old-k8s-version-499466/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0528 21:39:18.186838   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | About to run SSH command:
	I0528 21:39:18.186874   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | exit 0
	I0528 21:39:18.306039   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | SSH cmd err, output: <nil>: 
	I0528 21:39:18.306320   64874 main.go:141] libmachine: (old-k8s-version-499466) KVM machine creation complete!
	I0528 21:39:18.306609   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetConfigRaw
	I0528 21:39:18.307120   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .DriverName
	I0528 21:39:18.307300   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .DriverName
	I0528 21:39:18.307474   64874 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0528 21:39:18.307487   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetState
	I0528 21:39:18.308849   64874 main.go:141] libmachine: Detecting operating system of created instance...
	I0528 21:39:18.308861   64874 main.go:141] libmachine: Waiting for SSH to be available...
	I0528 21:39:18.308866   64874 main.go:141] libmachine: Getting to WaitForSSH function...
	I0528 21:39:18.308871   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHHostname
	I0528 21:39:18.311175   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:39:18.311530   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:bf:9b", ip: ""} in network mk-old-k8s-version-499466: {Iface:virbr1 ExpiryTime:2024-05-28 22:39:09 +0000 UTC Type:0 Mac:52:54:00:04:bf:9b Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:old-k8s-version-499466 Clientid:01:52:54:00:04:bf:9b}
	I0528 21:39:18.311572   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined IP address 192.168.39.8 and MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:39:18.311670   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHPort
	I0528 21:39:18.311809   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHKeyPath
	I0528 21:39:18.311957   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHKeyPath
	I0528 21:39:18.312117   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHUsername
	I0528 21:39:18.312274   64874 main.go:141] libmachine: Using SSH client type: native
	I0528 21:39:18.312455   64874 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.8 22 <nil> <nil>}
	I0528 21:39:18.312466   64874 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0528 21:39:18.409413   64874 main.go:141] libmachine: SSH cmd err, output: <nil>: 
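Note: both SSH checks above (the external ssh client and the native one) simply run `exit 0` on the guest and treat a zero exit status as "SSH is available". A hedged sketch of that probe using golang.org/x/crypto/ssh; the address, user, and key path are taken from the log, while the helper name is made up for illustration:

    package main

    import (
    	"fmt"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    // probeSSH runs `exit 0` on the guest and reports whether it succeeded,
    // which is all the "Waiting for SSH to be available..." step needs.
    func probeSSH(addr, user, keyPath string) error {
    	keyBytes, err := os.ReadFile(keyPath)
    	if err != nil {
    		return err
    	}
    	signer, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		return err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
    		Timeout:         10 * time.Second,
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return err
    	}
    	defer client.Close()
    	session, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer session.Close()
    	return session.Run("exit 0")
    }

    func main() {
    	err := probeSSH("192.168.39.8:22", "docker",
    		"/home/jenkins/minikube-integration/18966-3963/.minikube/machines/old-k8s-version-499466/id_rsa")
    	fmt.Println("ssh probe:", err)
    }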
	I0528 21:39:18.409447   64874 main.go:141] libmachine: Detecting the provisioner...
	I0528 21:39:18.409458   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHHostname
	I0528 21:39:18.412206   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:39:18.412547   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:bf:9b", ip: ""} in network mk-old-k8s-version-499466: {Iface:virbr1 ExpiryTime:2024-05-28 22:39:09 +0000 UTC Type:0 Mac:52:54:00:04:bf:9b Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:old-k8s-version-499466 Clientid:01:52:54:00:04:bf:9b}
	I0528 21:39:18.412575   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined IP address 192.168.39.8 and MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:39:18.412754   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHPort
	I0528 21:39:18.412944   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHKeyPath
	I0528 21:39:18.413102   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHKeyPath
	I0528 21:39:18.413242   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHUsername
	I0528 21:39:18.413395   64874 main.go:141] libmachine: Using SSH client type: native
	I0528 21:39:18.413605   64874 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.8 22 <nil> <nil>}
	I0528 21:39:18.413616   64874 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0528 21:39:18.514472   64874 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0528 21:39:18.514545   64874 main.go:141] libmachine: found compatible host: buildroot
	I0528 21:39:18.514558   64874 main.go:141] libmachine: Provisioning with buildroot...
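Note: provisioner detection above is just `cat /etc/os-release` plus a match on the ID/NAME fields, which here identifies Buildroot. A small sketch of parsing that output, assuming the key=value format shown in the log is all that is needed:

    package main

    import (
    	"bufio"
    	"fmt"
    	"strings"
    )

    // parseOSRelease turns the key=value lines from /etc/os-release into a map,
    // stripping optional quotes (e.g. PRETTY_NAME="Buildroot 2023.02.9").
    func parseOSRelease(out string) map[string]string {
    	fields := map[string]string{}
    	sc := bufio.NewScanner(strings.NewReader(out))
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		k, v, ok := strings.Cut(line, "=")
    		if !ok || k == "" {
    			continue
    		}
    		fields[k] = strings.Trim(v, `"`)
    	}
    	return fields
    }

    func main() {
    	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
    	f := parseOSRelease(out)
    	if strings.EqualFold(f["ID"], "buildroot") {
    		fmt.Println("found compatible host: buildroot", f["VERSION_ID"])
    	}
    }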
	I0528 21:39:18.514571   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetMachineName
	I0528 21:39:18.514806   64874 buildroot.go:166] provisioning hostname "old-k8s-version-499466"
	I0528 21:39:18.514829   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetMachineName
	I0528 21:39:18.515042   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHHostname
	I0528 21:39:18.517636   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:39:18.517955   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:bf:9b", ip: ""} in network mk-old-k8s-version-499466: {Iface:virbr1 ExpiryTime:2024-05-28 22:39:09 +0000 UTC Type:0 Mac:52:54:00:04:bf:9b Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:old-k8s-version-499466 Clientid:01:52:54:00:04:bf:9b}
	I0528 21:39:18.517982   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined IP address 192.168.39.8 and MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:39:18.518126   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHPort
	I0528 21:39:18.518307   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHKeyPath
	I0528 21:39:18.518473   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHKeyPath
	I0528 21:39:18.518629   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHUsername
	I0528 21:39:18.518789   64874 main.go:141] libmachine: Using SSH client type: native
	I0528 21:39:18.518943   64874 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.8 22 <nil> <nil>}
	I0528 21:39:18.518954   64874 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-499466 && echo "old-k8s-version-499466" | sudo tee /etc/hostname
	I0528 21:39:18.634160   64874 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-499466
	
	I0528 21:39:18.634184   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHHostname
	I0528 21:39:18.636822   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:39:18.637163   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:bf:9b", ip: ""} in network mk-old-k8s-version-499466: {Iface:virbr1 ExpiryTime:2024-05-28 22:39:09 +0000 UTC Type:0 Mac:52:54:00:04:bf:9b Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:old-k8s-version-499466 Clientid:01:52:54:00:04:bf:9b}
	I0528 21:39:18.637191   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined IP address 192.168.39.8 and MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:39:18.637408   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHPort
	I0528 21:39:18.637568   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHKeyPath
	I0528 21:39:18.637726   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHKeyPath
	I0528 21:39:18.637881   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHUsername
	I0528 21:39:18.638069   64874 main.go:141] libmachine: Using SSH client type: native
	I0528 21:39:18.638279   64874 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.8 22 <nil> <nil>}
	I0528 21:39:18.638307   64874 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-499466' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-499466/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-499466' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 21:39:18.743062   64874 main.go:141] libmachine: SSH cmd err, output: <nil>: 
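Note: the two SSH commands above set the hostname and then patch /etc/hosts so 127.0.1.1 resolves to the new name. A sketch of how such a script can be assembled from the hostname; the helper name is illustrative, not minikube's:

    package main

    import "fmt"

    // hostnameScript builds the same shell sequence seen in the log: set the
    // hostname, then ensure /etc/hosts carries a 127.0.1.1 entry for it.
    func hostnameScript(hostname string) string {
    	return fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname
    if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
      fi
    fi`, hostname)
    }

    func main() {
    	fmt.Println(hostnameScript("old-k8s-version-499466"))
    }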
	I0528 21:39:18.743092   64874 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18966-3963/.minikube CaCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18966-3963/.minikube}
	I0528 21:39:18.743133   64874 buildroot.go:174] setting up certificates
	I0528 21:39:18.743147   64874 provision.go:84] configureAuth start
	I0528 21:39:18.743168   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetMachineName
	I0528 21:39:18.743450   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetIP
	I0528 21:39:18.746359   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:39:18.746769   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:bf:9b", ip: ""} in network mk-old-k8s-version-499466: {Iface:virbr1 ExpiryTime:2024-05-28 22:39:09 +0000 UTC Type:0 Mac:52:54:00:04:bf:9b Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:old-k8s-version-499466 Clientid:01:52:54:00:04:bf:9b}
	I0528 21:39:18.746795   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined IP address 192.168.39.8 and MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:39:18.746959   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHHostname
	I0528 21:39:18.749239   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:39:18.749583   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:bf:9b", ip: ""} in network mk-old-k8s-version-499466: {Iface:virbr1 ExpiryTime:2024-05-28 22:39:09 +0000 UTC Type:0 Mac:52:54:00:04:bf:9b Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:old-k8s-version-499466 Clientid:01:52:54:00:04:bf:9b}
	I0528 21:39:18.749611   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined IP address 192.168.39.8 and MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:39:18.749790   64874 provision.go:143] copyHostCerts
	I0528 21:39:18.749869   64874 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem, removing ...
	I0528 21:39:18.749879   64874 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem
	I0528 21:39:18.749940   64874 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem (1078 bytes)
	I0528 21:39:18.750019   64874 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem, removing ...
	I0528 21:39:18.750027   64874 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem
	I0528 21:39:18.750051   64874 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem (1123 bytes)
	I0528 21:39:18.750098   64874 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem, removing ...
	I0528 21:39:18.750104   64874 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem
	I0528 21:39:18.750124   64874 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem (1675 bytes)
	I0528 21:39:18.750177   64874 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-499466 san=[127.0.0.1 192.168.39.8 localhost minikube old-k8s-version-499466]
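Note: the server certificate above is generated with a SAN list that mixes IPs and DNS names (san=[127.0.0.1 192.168.39.8 localhost minikube old-k8s-version-499466]). When building such a certificate with crypto/x509, those SANs have to be split into IPAddresses and DNSNames; a small sketch of that split:

    package main

    import (
    	"fmt"
    	"net"
    )

    // splitSANs separates certificate SANs into IP SANs and DNS SANs, the way
    // an x509.Certificate template expects them (IPAddresses vs. DNSNames).
    func splitSANs(sans []string) (ips []net.IP, dns []string) {
    	for _, san := range sans {
    		if ip := net.ParseIP(san); ip != nil {
    			ips = append(ips, ip)
    			continue
    		}
    		dns = append(dns, san)
    	}
    	return ips, dns
    }

    func main() {
    	ips, dns := splitSANs([]string{"127.0.0.1", "192.168.39.8", "localhost", "minikube", "old-k8s-version-499466"})
    	fmt.Println("IP SANs:", ips)
    	fmt.Println("DNS SANs:", dns)
    }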
	I0528 21:39:18.829587   64874 provision.go:177] copyRemoteCerts
	I0528 21:39:18.829639   64874 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 21:39:18.829662   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHHostname
	I0528 21:39:18.832310   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:39:18.832666   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:bf:9b", ip: ""} in network mk-old-k8s-version-499466: {Iface:virbr1 ExpiryTime:2024-05-28 22:39:09 +0000 UTC Type:0 Mac:52:54:00:04:bf:9b Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:old-k8s-version-499466 Clientid:01:52:54:00:04:bf:9b}
	I0528 21:39:18.832696   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined IP address 192.168.39.8 and MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:39:18.832890   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHPort
	I0528 21:39:18.833069   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHKeyPath
	I0528 21:39:18.833242   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHUsername
	I0528 21:39:18.833379   64874 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/old-k8s-version-499466/id_rsa Username:docker}
	I0528 21:39:18.911962   64874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0528 21:39:18.935280   64874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0528 21:39:18.958743   64874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0528 21:39:18.983472   64874 provision.go:87] duration metric: took 240.306898ms to configureAuth
	I0528 21:39:18.983498   64874 buildroot.go:189] setting minikube options for container-runtime
	I0528 21:39:18.983702   64874 config.go:182] Loaded profile config "old-k8s-version-499466": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0528 21:39:18.983775   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHHostname
	I0528 21:39:18.986539   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:39:18.986853   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:bf:9b", ip: ""} in network mk-old-k8s-version-499466: {Iface:virbr1 ExpiryTime:2024-05-28 22:39:09 +0000 UTC Type:0 Mac:52:54:00:04:bf:9b Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:old-k8s-version-499466 Clientid:01:52:54:00:04:bf:9b}
	I0528 21:39:18.986881   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined IP address 192.168.39.8 and MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:39:18.987048   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHPort
	I0528 21:39:18.987235   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHKeyPath
	I0528 21:39:18.987420   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHKeyPath
	I0528 21:39:18.987587   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHUsername
	I0528 21:39:18.987789   64874 main.go:141] libmachine: Using SSH client type: native
	I0528 21:39:18.987989   64874 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.8 22 <nil> <nil>}
	I0528 21:39:18.988008   64874 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0528 21:39:19.245474   64874 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0528 21:39:19.245496   64874 main.go:141] libmachine: Checking connection to Docker...
	I0528 21:39:19.245508   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetURL
	I0528 21:39:19.246753   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | Using libvirt version 6000000
	I0528 21:39:19.249286   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:39:19.249683   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:bf:9b", ip: ""} in network mk-old-k8s-version-499466: {Iface:virbr1 ExpiryTime:2024-05-28 22:39:09 +0000 UTC Type:0 Mac:52:54:00:04:bf:9b Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:old-k8s-version-499466 Clientid:01:52:54:00:04:bf:9b}
	I0528 21:39:19.249713   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined IP address 192.168.39.8 and MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:39:19.249846   64874 main.go:141] libmachine: Docker is up and running!
	I0528 21:39:19.249866   64874 main.go:141] libmachine: Reticulating splines...
	I0528 21:39:19.249871   64874 client.go:171] duration metric: took 24.549081697s to LocalClient.Create
	I0528 21:39:19.249891   64874 start.go:167] duration metric: took 24.549148529s to libmachine.API.Create "old-k8s-version-499466"
	I0528 21:39:19.249901   64874 start.go:293] postStartSetup for "old-k8s-version-499466" (driver="kvm2")
	I0528 21:39:19.249913   64874 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0528 21:39:19.249934   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .DriverName
	I0528 21:39:19.250235   64874 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0528 21:39:19.250263   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHHostname
	I0528 21:39:19.252519   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:39:19.252870   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:bf:9b", ip: ""} in network mk-old-k8s-version-499466: {Iface:virbr1 ExpiryTime:2024-05-28 22:39:09 +0000 UTC Type:0 Mac:52:54:00:04:bf:9b Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:old-k8s-version-499466 Clientid:01:52:54:00:04:bf:9b}
	I0528 21:39:19.252899   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined IP address 192.168.39.8 and MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:39:19.253018   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHPort
	I0528 21:39:19.253207   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHKeyPath
	I0528 21:39:19.253377   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHUsername
	I0528 21:39:19.253495   64874 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/old-k8s-version-499466/id_rsa Username:docker}
	I0528 21:39:19.333872   64874 ssh_runner.go:195] Run: cat /etc/os-release
	I0528 21:39:19.338215   64874 info.go:137] Remote host: Buildroot 2023.02.9
	I0528 21:39:19.338241   64874 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/addons for local assets ...
	I0528 21:39:19.338308   64874 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/files for local assets ...
	I0528 21:39:19.338404   64874 filesync.go:149] local asset: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem -> 117602.pem in /etc/ssl/certs
	I0528 21:39:19.338494   64874 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0528 21:39:19.347749   64874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem --> /etc/ssl/certs/117602.pem (1708 bytes)
	I0528 21:39:19.372165   64874 start.go:296] duration metric: took 122.252031ms for postStartSetup
	I0528 21:39:19.372205   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetConfigRaw
	I0528 21:39:19.372755   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetIP
	I0528 21:39:19.375732   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:39:19.376118   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:bf:9b", ip: ""} in network mk-old-k8s-version-499466: {Iface:virbr1 ExpiryTime:2024-05-28 22:39:09 +0000 UTC Type:0 Mac:52:54:00:04:bf:9b Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:old-k8s-version-499466 Clientid:01:52:54:00:04:bf:9b}
	I0528 21:39:19.376146   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined IP address 192.168.39.8 and MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:39:19.376409   64874 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/old-k8s-version-499466/config.json ...
	I0528 21:39:19.376568   64874 start.go:128] duration metric: took 24.694280834s to createHost
	I0528 21:39:19.376587   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHHostname
	I0528 21:39:19.379011   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:39:19.379352   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:bf:9b", ip: ""} in network mk-old-k8s-version-499466: {Iface:virbr1 ExpiryTime:2024-05-28 22:39:09 +0000 UTC Type:0 Mac:52:54:00:04:bf:9b Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:old-k8s-version-499466 Clientid:01:52:54:00:04:bf:9b}
	I0528 21:39:19.379386   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined IP address 192.168.39.8 and MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:39:19.379519   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHPort
	I0528 21:39:19.379674   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHKeyPath
	I0528 21:39:19.379791   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHKeyPath
	I0528 21:39:19.379923   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHUsername
	I0528 21:39:19.380056   64874 main.go:141] libmachine: Using SSH client type: native
	I0528 21:39:19.380216   64874 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.8 22 <nil> <nil>}
	I0528 21:39:19.380232   64874 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0528 21:39:19.478360   64874 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716932359.452828168
	
	I0528 21:39:19.478379   64874 fix.go:216] guest clock: 1716932359.452828168
	I0528 21:39:19.478385   64874 fix.go:229] Guest: 2024-05-28 21:39:19.452828168 +0000 UTC Remote: 2024-05-28 21:39:19.376577347 +0000 UTC m=+24.802909608 (delta=76.250821ms)
	I0528 21:39:19.478461   64874 fix.go:200] guest clock delta is within tolerance: 76.250821ms
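Note: the clock check above runs `date +%s.%N` on the guest, compares the result to the host clock, and accepts the machine when the delta is within tolerance. A sketch of that comparison; the tolerance value is an assumption for this sketch:

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"strings"
    	"time"
    )

    // clockDelta parses the guest's `date +%s.%N` output and returns the signed
    // difference between guest and host clocks.
    func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
    	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
    	if err != nil {
    		return 0, err
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	return guest.Sub(host), nil
    }

    func main() {
    	const tolerance = 2 * time.Second // assumed tolerance for this sketch
    	delta, err := clockDelta("1716932359.452828168", time.Unix(0, 1716932359376577347))
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	if math.Abs(float64(delta)) <= float64(tolerance) {
    		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
    	} else {
    		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
    	}
    }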
	I0528 21:39:19.478467   64874 start.go:83] releasing machines lock for "old-k8s-version-499466", held for 24.796283279s
	I0528 21:39:19.478486   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .DriverName
	I0528 21:39:19.478759   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetIP
	I0528 21:39:19.481177   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:39:19.481565   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:bf:9b", ip: ""} in network mk-old-k8s-version-499466: {Iface:virbr1 ExpiryTime:2024-05-28 22:39:09 +0000 UTC Type:0 Mac:52:54:00:04:bf:9b Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:old-k8s-version-499466 Clientid:01:52:54:00:04:bf:9b}
	I0528 21:39:19.481595   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined IP address 192.168.39.8 and MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:39:19.481712   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .DriverName
	I0528 21:39:19.482204   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .DriverName
	I0528 21:39:19.482371   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .DriverName
	I0528 21:39:19.482470   64874 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0528 21:39:19.482511   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHHostname
	I0528 21:39:19.482567   64874 ssh_runner.go:195] Run: cat /version.json
	I0528 21:39:19.482588   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHHostname
	I0528 21:39:19.485016   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:39:19.485311   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:39:19.485449   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:bf:9b", ip: ""} in network mk-old-k8s-version-499466: {Iface:virbr1 ExpiryTime:2024-05-28 22:39:09 +0000 UTC Type:0 Mac:52:54:00:04:bf:9b Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:old-k8s-version-499466 Clientid:01:52:54:00:04:bf:9b}
	I0528 21:39:19.485524   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined IP address 192.168.39.8 and MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:39:19.485585   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHPort
	I0528 21:39:19.485737   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:bf:9b", ip: ""} in network mk-old-k8s-version-499466: {Iface:virbr1 ExpiryTime:2024-05-28 22:39:09 +0000 UTC Type:0 Mac:52:54:00:04:bf:9b Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:old-k8s-version-499466 Clientid:01:52:54:00:04:bf:9b}
	I0528 21:39:19.485755   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHKeyPath
	I0528 21:39:19.485756   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined IP address 192.168.39.8 and MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:39:19.485945   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHPort
	I0528 21:39:19.485954   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHUsername
	I0528 21:39:19.486091   64874 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/old-k8s-version-499466/id_rsa Username:docker}
	I0528 21:39:19.486161   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHKeyPath
	I0528 21:39:19.486283   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHUsername
	I0528 21:39:19.486446   64874 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/old-k8s-version-499466/id_rsa Username:docker}
	I0528 21:39:19.572287   64874 ssh_runner.go:195] Run: systemctl --version
	I0528 21:39:19.594199   64874 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0528 21:39:19.755553   64874 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0528 21:39:19.762668   64874 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0528 21:39:19.762754   64874 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 21:39:19.779606   64874 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0528 21:39:19.779627   64874 start.go:494] detecting cgroup driver to use...
	I0528 21:39:19.779679   64874 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0528 21:39:19.799036   64874 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 21:39:19.815568   64874 docker.go:217] disabling cri-docker service (if available) ...
	I0528 21:39:19.815638   64874 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0528 21:39:19.831729   64874 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0528 21:39:19.845508   64874 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0528 21:39:19.969376   64874 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0528 21:39:20.111944   64874 docker.go:233] disabling docker service ...
	I0528 21:39:20.112026   64874 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0528 21:39:20.127282   64874 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0528 21:39:20.144615   64874 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0528 21:39:20.286243   64874 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0528 21:39:20.430024   64874 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
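Note: disabling cri-docker and docker above is a fixed sequence of systemctl stop/disable/mask calls, with failures tolerated because those units may not exist on the guest image. A sketch of that sequence run locally with os/exec (minikube issues them over SSH via its ssh_runner, which this does not reproduce):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // disableDockerUnits mirrors the systemctl sequence in the log. Failures are
    // only logged, since the units may simply not be present on the guest.
    func disableDockerUnits() {
    	cmds := [][]string{
    		{"sudo", "systemctl", "stop", "-f", "cri-docker.socket"},
    		{"sudo", "systemctl", "stop", "-f", "cri-docker.service"},
    		{"sudo", "systemctl", "disable", "cri-docker.socket"},
    		{"sudo", "systemctl", "mask", "cri-docker.service"},
    		{"sudo", "systemctl", "stop", "-f", "docker.socket"},
    		{"sudo", "systemctl", "stop", "-f", "docker.service"},
    		{"sudo", "systemctl", "disable", "docker.socket"},
    		{"sudo", "systemctl", "mask", "docker.service"},
    	}
    	for _, c := range cmds {
    		if out, err := exec.Command(c[0], c[1:]...).CombinedOutput(); err != nil {
    			fmt.Printf("%v failed (often harmless here): %v: %s\n", c, err, out)
    		}
    	}
    }

    func main() { disableDockerUnits() }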
	I0528 21:39:20.446574   64874 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 21:39:20.466965   64874 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0528 21:39:20.467038   64874 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:39:20.481150   64874 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0528 21:39:20.481204   64874 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:39:20.492078   64874 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:39:20.502178   64874 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:39:20.512853   64874 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
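Note: the CRI-O configuration above is adjusted with in-place sed edits: pin the pause image, switch cgroup_manager to cgroupfs, and force conmon into the pod cgroup. A sketch that assembles those same sed invocations, with the paths and values copied from the log:

    package main

    import "fmt"

    // crioConfigCmds returns the shell commands used to point CRI-O at the right
    // pause image and cgroup driver, as seen in the log above.
    func crioConfigCmds(pauseImage, cgroupDriver string) []string {
    	conf := "/etc/crio/crio.conf.d/02-crio.conf"
    	return []string{
    		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
    		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupDriver, conf),
    		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
    		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
    	}
    }

    func main() {
    	for _, c := range crioConfigCmds("registry.k8s.io/pause:3.2", "cgroupfs") {
    		fmt.Println(c)
    	}
    }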
	I0528 21:39:20.523587   64874 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0528 21:39:20.533890   64874 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0528 21:39:20.533944   64874 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0528 21:39:20.549526   64874 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
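Note: the netfilter step above first probes the bridge-nf-call-iptables sysctl; when the key is missing (status 255, as in the log), it loads br_netfilter and then enables IPv4 forwarding. A sketch of that fallback logic with os/exec:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // ensureBridgeNetfilter mirrors the probe-then-modprobe fallback in the log:
    // if the sysctl key is absent, load br_netfilter, then enable ip_forward.
    func ensureBridgeNetfilter() error {
    	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
    		fmt.Println("couldn't verify netfilter, loading br_netfilter:", err)
    		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
    			return err
    		}
    	}
    	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
    }

    func main() {
    	if err := ensureBridgeNetfilter(); err != nil {
    		fmt.Println("netfilter setup failed:", err)
    	}
    }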
	I0528 21:39:20.561211   64874 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 21:39:20.693136   64874 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0528 21:39:20.835430   64874 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0528 21:39:20.835538   64874 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0528 21:39:20.842129   64874 start.go:562] Will wait 60s for crictl version
	I0528 21:39:20.842180   64874 ssh_runner.go:195] Run: which crictl
	I0528 21:39:20.846354   64874 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0528 21:39:20.889604   64874 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
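Note: after restarting CRI-O, the log waits up to 60s for /var/run/crio/crio.sock to appear and for `crictl version` to answer. A generic poll-until-deadline helper covering both waits (a sketch, not minikube's implementation):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"time"
    )

    // waitFor polls check every interval until it succeeds or the timeout passes.
    func waitFor(what string, timeout, interval time.Duration, check func() error) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		err := check()
    		if err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out waiting for %s: %v", what, err)
    		}
    		time.Sleep(interval)
    	}
    }

    func main() {
    	_ = waitFor("crio socket", 60*time.Second, time.Second, func() error {
    		_, err := os.Stat("/var/run/crio/crio.sock")
    		return err
    	})
    	_ = waitFor("crictl version", 60*time.Second, time.Second, func() error {
    		return exec.Command("sudo", "/usr/bin/crictl", "version").Run()
    	})
    }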
	I0528 21:39:20.889663   64874 ssh_runner.go:195] Run: crio --version
	I0528 21:39:20.921984   64874 ssh_runner.go:195] Run: crio --version
	I0528 21:39:20.954289   64874 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0528 21:39:20.955485   64874 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetIP
	I0528 21:39:20.958338   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:39:20.958654   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:bf:9b", ip: ""} in network mk-old-k8s-version-499466: {Iface:virbr1 ExpiryTime:2024-05-28 22:39:09 +0000 UTC Type:0 Mac:52:54:00:04:bf:9b Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:old-k8s-version-499466 Clientid:01:52:54:00:04:bf:9b}
	I0528 21:39:20.958684   64874 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined IP address 192.168.39.8 and MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:39:20.958829   64874 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0528 21:39:20.963646   64874 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 21:39:20.977850   64874 kubeadm.go:877] updating cluster {Name:old-k8s-version-499466 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-499466 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.8 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0528 21:39:20.977980   64874 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0528 21:39:20.978035   64874 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 21:39:21.017449   64874 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0528 21:39:21.017501   64874 ssh_runner.go:195] Run: which lz4
	I0528 21:39:21.021886   64874 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0528 21:39:21.026231   64874 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0528 21:39:21.026261   64874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0528 21:39:22.677352   64874 crio.go:462] duration metric: took 1.655490738s to copy over tarball
	I0528 21:39:22.677416   64874 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0528 21:39:25.553028   64874 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.875582718s)
	I0528 21:39:25.553053   64874 crio.go:469] duration metric: took 2.875674993s to extract the tarball
	I0528 21:39:25.553061   64874 ssh_runner.go:146] rm: /preloaded.tar.lz4
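Note: the preload path above checks whether /preloaded.tar.lz4 already exists on the guest, copies the per-version tarball over when it does not, and unpacks it into /var with lz4. A local sketch of the existence check and the extraction command; the scp step is out of scope here:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // extractPreload unpacks the preloaded image tarball into /var, as in the
    // log. Copying the tarball onto the guest (the scp step) is not shown.
    func extractPreload(tarball string) error {
    	if _, err := os.Stat(tarball); err != nil {
    		return fmt.Errorf("preload tarball missing, would need to copy it first: %w", err)
    	}
    	cmd := exec.Command("sudo", "tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", tarball)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	return cmd.Run()
    }

    func main() {
    	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
    		fmt.Println(err)
    	}
    }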
	I0528 21:39:25.597229   64874 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 21:39:25.650549   64874 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0528 21:39:25.650576   64874 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0528 21:39:25.650677   64874 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0528 21:39:25.650706   64874 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0528 21:39:25.650709   64874 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0528 21:39:25.650678   64874 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0528 21:39:25.650678   64874 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0528 21:39:25.650695   64874 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0528 21:39:25.650741   64874 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0528 21:39:25.650738   64874 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0528 21:39:25.652470   64874 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0528 21:39:25.653355   64874 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0528 21:39:25.653387   64874 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0528 21:39:25.653425   64874 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0528 21:39:25.653454   64874 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0528 21:39:25.653677   64874 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0528 21:39:25.653621   64874 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0528 21:39:25.653943   64874 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0528 21:39:25.783658   64874 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0528 21:39:25.794568   64874 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0528 21:39:25.816058   64874 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0528 21:39:25.817184   64874 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0528 21:39:25.828193   64874 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0528 21:39:25.833281   64874 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0528 21:39:25.898222   64874 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0528 21:39:25.902158   64874 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0528 21:39:25.902201   64874 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0528 21:39:25.902241   64874 ssh_runner.go:195] Run: which crictl
	I0528 21:39:26.016325   64874 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0528 21:39:26.016369   64874 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0528 21:39:26.016400   64874 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0528 21:39:26.016414   64874 ssh_runner.go:195] Run: which crictl
	I0528 21:39:26.016419   64874 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0528 21:39:26.016421   64874 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0528 21:39:26.016437   64874 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0528 21:39:26.016469   64874 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0528 21:39:26.016472   64874 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0528 21:39:26.016445   64874 ssh_runner.go:195] Run: which crictl
	I0528 21:39:26.016473   64874 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0528 21:39:26.016398   64874 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0528 21:39:26.016515   64874 ssh_runner.go:195] Run: which crictl
	I0528 21:39:26.016524   64874 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0528 21:39:26.016444   64874 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0528 21:39:26.016549   64874 ssh_runner.go:195] Run: which crictl
	I0528 21:39:26.016561   64874 ssh_runner.go:195] Run: which crictl
	I0528 21:39:26.016493   64874 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0528 21:39:26.016587   64874 ssh_runner.go:195] Run: which crictl
	I0528 21:39:26.034881   64874 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0528 21:39:26.034961   64874 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0528 21:39:26.034964   64874 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0528 21:39:26.035006   64874 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0528 21:39:26.112469   64874 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0528 21:39:26.112595   64874 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0528 21:39:26.112641   64874 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0528 21:39:26.184887   64874 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0528 21:39:26.184960   64874 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0528 21:39:26.185024   64874 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0528 21:39:26.185214   64874 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0528 21:39:26.202751   64874 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0528 21:39:26.202815   64874 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0528 21:39:26.618275   64874 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0528 21:39:26.761312   64874 cache_images.go:92] duration metric: took 1.110713335s to LoadCachedImages
	W0528 21:39:26.761397   64874 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
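Note: the image checks above compare `podman image inspect --format {{.Id}}` output against the digest expected for each v1.20.0 image; anything missing or mismatched is removed with crictl and scheduled to be loaded from the local cache, which fails here because the cached files are absent. A sketch of that per-image decision, using the kube-scheduler values from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // needsTransfer reports whether image is absent from the container runtime
    // or does not match wantID, mirroring the "needs transfer" lines in the log.
    func needsTransfer(image, wantID string) bool {
    	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
    	if err != nil {
    		return true // not present at all
    	}
    	return strings.TrimSpace(string(out)) != wantID
    }

    func main() {
    	image := "registry.k8s.io/kube-scheduler:v1.20.0"
    	wantID := "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899"
    	if needsTransfer(image, wantID) {
    		// Remove the stale image before loading the cached copy (cache path as in the log).
    		_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", image).Run()
    		fmt.Println("would load from cache:", "/home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0")
    	}
    }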
	I0528 21:39:26.761417   64874 kubeadm.go:928] updating node { 192.168.39.8 8443 v1.20.0 crio true true} ...
	I0528 21:39:26.761548   64874 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-499466 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.8
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-499466 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0528 21:39:26.761626   64874 ssh_runner.go:195] Run: crio config
	I0528 21:39:26.819662   64874 cni.go:84] Creating CNI manager for ""
	I0528 21:39:26.819686   64874 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 21:39:26.819702   64874 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0528 21:39:26.819728   64874 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.8 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-499466 NodeName:old-k8s-version-499466 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.8"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.8 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0528 21:39:26.819913   64874 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.8
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-499466"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.8
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.8"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0528 21:39:26.819976   64874 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0528 21:39:26.839577   64874 binaries.go:44] Found k8s binaries, skipping transfer
	I0528 21:39:26.839644   64874 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0528 21:39:26.849714   64874 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (428 bytes)
	I0528 21:39:26.867170   64874 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0528 21:39:26.886183   64874 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
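	(Editor's note, not part of the captured log: a minimal sketch for cross-checking the cgroup driver once the kubeadm/kubelet configuration above has been written to the node. The KubeletConfiguration pins cgroupDriver: cgroupfs, and a mismatch with CRI-O's cgroup manager is a common reason a kubelet later reports as not running or healthy. Paths follow the node layout shown in this log; the commands are plain crio/grep invocations.)
	# compare CRI-O's cgroup manager with the kubelet's configured cgroup driver
	sudo crio config 2>/dev/null | grep -i cgroup_manager
	sudo grep -i cgroupDriver /var/lib/kubelet/config.yaml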
	I0528 21:39:26.904995   64874 ssh_runner.go:195] Run: grep 192.168.39.8	control-plane.minikube.internal$ /etc/hosts
	I0528 21:39:26.909387   64874 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.8	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 21:39:26.921662   64874 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 21:39:27.062057   64874 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 21:39:27.080446   64874 certs.go:68] Setting up /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/old-k8s-version-499466 for IP: 192.168.39.8
	I0528 21:39:27.080463   64874 certs.go:194] generating shared ca certs ...
	I0528 21:39:27.080475   64874 certs.go:226] acquiring lock for ca certs: {Name:mkf081a2e878fd451cfbdf01dc28a5f7a6a3abc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:39:27.080590   64874 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key
	I0528 21:39:27.080622   64874 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key
	I0528 21:39:27.080632   64874 certs.go:256] generating profile certs ...
	I0528 21:39:27.080684   64874 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/old-k8s-version-499466/client.key
	I0528 21:39:27.080697   64874 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/old-k8s-version-499466/client.crt with IP's: []
	I0528 21:39:27.378529   64874 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/old-k8s-version-499466/client.crt ...
	I0528 21:39:27.378566   64874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/old-k8s-version-499466/client.crt: {Name:mk9ce377fa8e4769738cf95d9a55efd02085902b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:39:27.378751   64874 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/old-k8s-version-499466/client.key ...
	I0528 21:39:27.378772   64874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/old-k8s-version-499466/client.key: {Name:mk4a94797a4a087660b6c9541333ad93a97da473 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:39:27.378902   64874 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/old-k8s-version-499466/apiserver.key.2337190f
	I0528 21:39:27.378923   64874 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/old-k8s-version-499466/apiserver.crt.2337190f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.8]
	I0528 21:39:27.447720   64874 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/old-k8s-version-499466/apiserver.crt.2337190f ...
	I0528 21:39:27.447746   64874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/old-k8s-version-499466/apiserver.crt.2337190f: {Name:mk98ac8132284941449a339fe624a11abdb8bbc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:39:27.476157   64874 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/old-k8s-version-499466/apiserver.key.2337190f ...
	I0528 21:39:27.476192   64874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/old-k8s-version-499466/apiserver.key.2337190f: {Name:mkc4fa3ada3de02e43b108ae3273ec46b870742b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:39:27.476335   64874 certs.go:381] copying /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/old-k8s-version-499466/apiserver.crt.2337190f -> /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/old-k8s-version-499466/apiserver.crt
	I0528 21:39:27.476440   64874 certs.go:385] copying /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/old-k8s-version-499466/apiserver.key.2337190f -> /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/old-k8s-version-499466/apiserver.key
	I0528 21:39:27.476602   64874 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/old-k8s-version-499466/proxy-client.key
	I0528 21:39:27.476624   64874 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/old-k8s-version-499466/proxy-client.crt with IP's: []
	I0528 21:39:27.997407   64874 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/old-k8s-version-499466/proxy-client.crt ...
	I0528 21:39:27.997437   64874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/old-k8s-version-499466/proxy-client.crt: {Name:mk7da2d3da24412f2ae7539ac3c6a2f9240f9317 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:39:27.997581   64874 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/old-k8s-version-499466/proxy-client.key ...
	I0528 21:39:27.997593   64874 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/old-k8s-version-499466/proxy-client.key: {Name:mk1e14433d517b653aa5d93dc2bbfea00347a1d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:39:27.997778   64874 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem (1338 bytes)
	W0528 21:39:27.997825   64874 certs.go:480] ignoring /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760_empty.pem, impossibly tiny 0 bytes
	I0528 21:39:27.997835   64874 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem (1675 bytes)
	I0528 21:39:27.997856   64874 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem (1078 bytes)
	I0528 21:39:27.997880   64874 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem (1123 bytes)
	I0528 21:39:27.997900   64874 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem (1675 bytes)
	I0528 21:39:27.997948   64874 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem (1708 bytes)
	I0528 21:39:27.998577   64874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0528 21:39:28.030220   64874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0528 21:39:28.055705   64874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0528 21:39:28.096165   64874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0528 21:39:28.134093   64874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/old-k8s-version-499466/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0528 21:39:28.161112   64874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/old-k8s-version-499466/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0528 21:39:28.203156   64874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/old-k8s-version-499466/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0528 21:39:28.228670   64874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/old-k8s-version-499466/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0528 21:39:28.276705   64874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0528 21:39:28.305273   64874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem --> /usr/share/ca-certificates/11760.pem (1338 bytes)
	I0528 21:39:28.334957   64874 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem --> /usr/share/ca-certificates/117602.pem (1708 bytes)
	I0528 21:39:28.360036   64874 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0528 21:39:28.377903   64874 ssh_runner.go:195] Run: openssl version
	I0528 21:39:28.384887   64874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11760.pem && ln -fs /usr/share/ca-certificates/11760.pem /etc/ssl/certs/11760.pem"
	I0528 21:39:28.398944   64874 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11760.pem
	I0528 21:39:28.403508   64874 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 28 20:34 /usr/share/ca-certificates/11760.pem
	I0528 21:39:28.403557   64874 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11760.pem
	I0528 21:39:28.410090   64874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11760.pem /etc/ssl/certs/51391683.0"
	I0528 21:39:28.420781   64874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/117602.pem && ln -fs /usr/share/ca-certificates/117602.pem /etc/ssl/certs/117602.pem"
	I0528 21:39:28.431440   64874 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117602.pem
	I0528 21:39:28.435650   64874 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 28 20:34 /usr/share/ca-certificates/117602.pem
	I0528 21:39:28.435690   64874 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117602.pem
	I0528 21:39:28.441325   64874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/117602.pem /etc/ssl/certs/3ec20f2e.0"
	I0528 21:39:28.451798   64874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0528 21:39:28.462697   64874 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0528 21:39:28.467808   64874 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 28 20:22 /usr/share/ca-certificates/minikubeCA.pem
	I0528 21:39:28.467847   64874 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0528 21:39:28.473547   64874 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
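	(Editor's note, not part of the captured log: the /etc/ssl/certs/<hash>.0 link names used above, e.g. b5213941.0, follow OpenSSL's subject-hash convention; a minimal sketch of the same pattern using the minikubeCA.pem path from this log.)
	# derive the subject hash and create the <hash>.0 symlink OpenSSL looks up
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"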
	I0528 21:39:28.484265   64874 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0528 21:39:28.488324   64874 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0528 21:39:28.488380   64874 kubeadm.go:391] StartCluster: {Name:old-k8s-version-499466 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-499466 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.8 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:39:28.488485   64874 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0528 21:39:28.488536   64874 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0528 21:39:28.528672   64874 cri.go:89] found id: ""
	I0528 21:39:28.528753   64874 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0528 21:39:28.539887   64874 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0528 21:39:28.550542   64874 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0528 21:39:28.561103   64874 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0528 21:39:28.561124   64874 kubeadm.go:156] found existing configuration files:
	
	I0528 21:39:28.561170   64874 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0528 21:39:28.571092   64874 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0528 21:39:28.571142   64874 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0528 21:39:28.580641   64874 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0528 21:39:28.590332   64874 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0528 21:39:28.590390   64874 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0528 21:39:28.600277   64874 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0528 21:39:28.609686   64874 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0528 21:39:28.609728   64874 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0528 21:39:28.618994   64874 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0528 21:39:28.630696   64874 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0528 21:39:28.630736   64874 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0528 21:39:28.640055   64874 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0528 21:39:29.046169   64874 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0528 21:41:27.357907   64874 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0528 21:41:27.358039   64874 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0528 21:41:27.359384   64874 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0528 21:41:27.359445   64874 kubeadm.go:309] [preflight] Running pre-flight checks
	I0528 21:41:27.359557   64874 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0528 21:41:27.359705   64874 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0528 21:41:27.359859   64874 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0528 21:41:27.359954   64874 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0528 21:41:27.361833   64874 out.go:204]   - Generating certificates and keys ...
	I0528 21:41:27.361924   64874 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0528 21:41:27.361984   64874 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0528 21:41:27.362071   64874 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0528 21:41:27.362151   64874 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0528 21:41:27.362225   64874 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0528 21:41:27.362296   64874 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0528 21:41:27.362357   64874 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0528 21:41:27.362492   64874 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-499466] and IPs [192.168.39.8 127.0.0.1 ::1]
	I0528 21:41:27.362583   64874 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0528 21:41:27.362714   64874 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-499466] and IPs [192.168.39.8 127.0.0.1 ::1]
	I0528 21:41:27.362799   64874 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0528 21:41:27.362884   64874 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0528 21:41:27.362949   64874 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0528 21:41:27.363043   64874 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0528 21:41:27.363131   64874 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0528 21:41:27.363221   64874 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0528 21:41:27.363325   64874 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0528 21:41:27.363387   64874 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0528 21:41:27.363516   64874 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0528 21:41:27.363594   64874 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0528 21:41:27.363629   64874 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0528 21:41:27.363711   64874 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0528 21:41:27.365091   64874 out.go:204]   - Booting up control plane ...
	I0528 21:41:27.365206   64874 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0528 21:41:27.365303   64874 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0528 21:41:27.365370   64874 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0528 21:41:27.365452   64874 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0528 21:41:27.365594   64874 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0528 21:41:27.365658   64874 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0528 21:41:27.365766   64874 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:41:27.366017   64874 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:41:27.366098   64874 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:41:27.366347   64874 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:41:27.366409   64874 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:41:27.366569   64874 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:41:27.366631   64874 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:41:27.366789   64874 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:41:27.366848   64874 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:41:27.367068   64874 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:41:27.367079   64874 kubeadm.go:309] 
	I0528 21:41:27.367113   64874 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0528 21:41:27.367149   64874 kubeadm.go:309] 		timed out waiting for the condition
	I0528 21:41:27.367157   64874 kubeadm.go:309] 
	I0528 21:41:27.367186   64874 kubeadm.go:309] 	This error is likely caused by:
	I0528 21:41:27.367220   64874 kubeadm.go:309] 		- The kubelet is not running
	I0528 21:41:27.367315   64874 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0528 21:41:27.367322   64874 kubeadm.go:309] 
	I0528 21:41:27.367440   64874 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0528 21:41:27.367479   64874 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0528 21:41:27.367524   64874 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0528 21:41:27.367531   64874 kubeadm.go:309] 
	I0528 21:41:27.367644   64874 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0528 21:41:27.367738   64874 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0528 21:41:27.367747   64874 kubeadm.go:309] 
	I0528 21:41:27.367837   64874 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0528 21:41:27.367963   64874 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0528 21:41:27.368075   64874 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0528 21:41:27.368176   64874 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0528 21:41:27.368202   64874 kubeadm.go:309] 
	W0528 21:41:27.368304   64874 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-499466] and IPs [192.168.39.8 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-499466] and IPs [192.168.39.8 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0528 21:41:27.368348   64874 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0528 21:41:27.872271   64874 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 21:41:27.887418   64874 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0528 21:41:27.897087   64874 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0528 21:41:27.897105   64874 kubeadm.go:156] found existing configuration files:
	
	I0528 21:41:27.897141   64874 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0528 21:41:27.906553   64874 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0528 21:41:27.906593   64874 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0528 21:41:27.916150   64874 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0528 21:41:27.925396   64874 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0528 21:41:27.925441   64874 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0528 21:41:27.934862   64874 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0528 21:41:27.943971   64874 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0528 21:41:27.944004   64874 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0528 21:41:27.953334   64874 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0528 21:41:27.961801   64874 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0528 21:41:27.961845   64874 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0528 21:41:27.970614   64874 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0528 21:41:28.038778   64874 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0528 21:41:28.038867   64874 kubeadm.go:309] [preflight] Running pre-flight checks
	I0528 21:41:28.181089   64874 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0528 21:41:28.181252   64874 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0528 21:41:28.181397   64874 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0528 21:41:28.353859   64874 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0528 21:41:28.355936   64874 out.go:204]   - Generating certificates and keys ...
	I0528 21:41:28.356051   64874 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0528 21:41:28.356147   64874 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0528 21:41:28.356251   64874 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0528 21:41:28.356351   64874 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0528 21:41:28.356468   64874 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0528 21:41:28.356556   64874 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0528 21:41:28.356656   64874 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0528 21:41:28.356885   64874 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0528 21:41:28.357243   64874 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0528 21:41:28.357574   64874 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0528 21:41:28.357643   64874 kubeadm.go:309] [certs] Using the existing "sa" key
	I0528 21:41:28.357735   64874 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0528 21:41:28.554078   64874 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0528 21:41:28.966148   64874 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0528 21:41:29.468187   64874 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0528 21:41:29.622446   64874 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0528 21:41:29.636848   64874 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0528 21:41:29.637898   64874 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0528 21:41:29.637971   64874 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0528 21:41:29.806864   64874 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0528 21:41:29.808447   64874 out.go:204]   - Booting up control plane ...
	I0528 21:41:29.808577   64874 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0528 21:41:29.812249   64874 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0528 21:41:29.813780   64874 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0528 21:41:29.814431   64874 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0528 21:41:29.816692   64874 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0528 21:42:09.819448   64874 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0528 21:42:09.819869   64874 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:42:09.820057   64874 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:42:14.820598   64874 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:42:14.820855   64874 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:42:24.821302   64874 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:42:24.821481   64874 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:42:44.820414   64874 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:42:44.820609   64874 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:43:24.820265   64874 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:43:24.820541   64874 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:43:24.820565   64874 kubeadm.go:309] 
	I0528 21:43:24.820621   64874 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0528 21:43:24.820679   64874 kubeadm.go:309] 		timed out waiting for the condition
	I0528 21:43:24.820689   64874 kubeadm.go:309] 
	I0528 21:43:24.820742   64874 kubeadm.go:309] 	This error is likely caused by:
	I0528 21:43:24.820807   64874 kubeadm.go:309] 		- The kubelet is not running
	I0528 21:43:24.820965   64874 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0528 21:43:24.820984   64874 kubeadm.go:309] 
	I0528 21:43:24.821082   64874 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0528 21:43:24.821115   64874 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0528 21:43:24.821142   64874 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0528 21:43:24.821146   64874 kubeadm.go:309] 
	I0528 21:43:24.821250   64874 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0528 21:43:24.821338   64874 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0528 21:43:24.821360   64874 kubeadm.go:309] 
	I0528 21:43:24.821464   64874 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0528 21:43:24.821591   64874 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0528 21:43:24.821691   64874 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0528 21:43:24.821802   64874 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0528 21:43:24.821814   64874 kubeadm.go:309] 
	I0528 21:43:24.822736   64874 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0528 21:43:24.822854   64874 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0528 21:43:24.822947   64874 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
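	(Editor's note, not part of the captured log: a minimal sketch consolidating the troubleshooting commands kubeadm suggests above, to be run on the node to see why the kubelet never became healthy.)
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet --no-pager | tail -n 100
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause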
	I0528 21:43:24.823004   64874 kubeadm.go:393] duration metric: took 3m56.334627812s to StartCluster
	I0528 21:43:24.823060   64874 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:43:24.823112   64874 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:43:24.872331   64874 cri.go:89] found id: ""
	I0528 21:43:24.872359   64874 logs.go:276] 0 containers: []
	W0528 21:43:24.872366   64874 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:43:24.872372   64874 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:43:24.872424   64874 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:43:24.917358   64874 cri.go:89] found id: ""
	I0528 21:43:24.917387   64874 logs.go:276] 0 containers: []
	W0528 21:43:24.917395   64874 logs.go:278] No container was found matching "etcd"
	I0528 21:43:24.917401   64874 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:43:24.917446   64874 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:43:24.959747   64874 cri.go:89] found id: ""
	I0528 21:43:24.959781   64874 logs.go:276] 0 containers: []
	W0528 21:43:24.959791   64874 logs.go:278] No container was found matching "coredns"
	I0528 21:43:24.959798   64874 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:43:24.959847   64874 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:43:25.000522   64874 cri.go:89] found id: ""
	I0528 21:43:25.000548   64874 logs.go:276] 0 containers: []
	W0528 21:43:25.000559   64874 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:43:25.000566   64874 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:43:25.000626   64874 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:43:25.042166   64874 cri.go:89] found id: ""
	I0528 21:43:25.042191   64874 logs.go:276] 0 containers: []
	W0528 21:43:25.042200   64874 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:43:25.042208   64874 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:43:25.042269   64874 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:43:25.076228   64874 cri.go:89] found id: ""
	I0528 21:43:25.076258   64874 logs.go:276] 0 containers: []
	W0528 21:43:25.076268   64874 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:43:25.076275   64874 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:43:25.076332   64874 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:43:25.117156   64874 cri.go:89] found id: ""
	I0528 21:43:25.117189   64874 logs.go:276] 0 containers: []
	W0528 21:43:25.117199   64874 logs.go:278] No container was found matching "kindnet"
	I0528 21:43:25.117210   64874 logs.go:123] Gathering logs for kubelet ...
	I0528 21:43:25.117224   64874 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:43:25.165693   64874 logs.go:123] Gathering logs for dmesg ...
	I0528 21:43:25.165722   64874 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:43:25.179297   64874 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:43:25.179325   64874 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:43:25.351513   64874 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:43:25.351535   64874 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:43:25.351550   64874 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:43:25.450877   64874 logs.go:123] Gathering logs for container status ...
	I0528 21:43:25.450910   64874 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0528 21:43:25.489435   64874 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0528 21:43:25.489477   64874 out.go:239] * 
	* 
	W0528 21:43:25.489540   64874 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0528 21:43:25.489570   64874 out.go:239] * 
	* 
	W0528 21:43:25.490447   64874 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0528 21:43:25.493547   64874 out.go:177] 
	W0528 21:43:25.494545   64874 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0528 21:43:25.494597   64874 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0528 21:43:25.494620   64874 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0528 21:43:25.495814   64874 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-499466 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-499466 -n old-k8s-version-499466
E0528 21:43:25.573266   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/flannel-110727/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-499466 -n old-k8s-version-499466: exit status 6 (213.938163ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0528 21:43:25.750621   69461 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-499466" does not appear in /home/jenkins/minikube-integration/18966-3963/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-499466" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (271.20s)
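The first start above exits with K8S_KUBELET_NOT_RUNNING, and the output itself points at the kubelet journal and an explicit cgroup driver as the next things to check. A minimal triage sketch, reusing only commands and flags already printed above (the profile name old-k8s-version-499466 and the --extra-config hint are taken from the log; the start line is abbreviated to the flags relevant here), offered as a reproduction aid rather than part of the recorded test run:

    # inspect kubelet and CRI-O state inside the guest, following the kubeadm advice quoted above
    out/minikube-linux-amd64 -p old-k8s-version-499466 ssh "sudo systemctl status kubelet"
    out/minikube-linux-amd64 -p old-k8s-version-499466 ssh "sudo journalctl -xeu kubelet | tail -n 100"
    out/minikube-linux-amd64 -p old-k8s-version-499466 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

    # retry the start with the cgroup-driver suggestion minikube printed
    out/minikube-linux-amd64 start -p old-k8s-version-499466 --memory=2200 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd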

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (139.05s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-290122 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-290122 --alsologtostderr -v=3: exit status 82 (2m0.488818285s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-290122"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0528 21:41:32.797241   68901 out.go:291] Setting OutFile to fd 1 ...
	I0528 21:41:32.797802   68901 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:41:32.797819   68901 out.go:304] Setting ErrFile to fd 2...
	I0528 21:41:32.797826   68901 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:41:32.798224   68901 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
	I0528 21:41:32.798728   68901 out.go:298] Setting JSON to false
	I0528 21:41:32.798863   68901 mustload.go:65] Loading cluster: no-preload-290122
	I0528 21:41:32.799206   68901 config.go:182] Loaded profile config "no-preload-290122": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 21:41:32.799278   68901 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/no-preload-290122/config.json ...
	I0528 21:41:32.799437   68901 mustload.go:65] Loading cluster: no-preload-290122
	I0528 21:41:32.799530   68901 config.go:182] Loaded profile config "no-preload-290122": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 21:41:32.799552   68901 stop.go:39] StopHost: no-preload-290122
	I0528 21:41:32.799965   68901 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:41:32.800024   68901 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:41:32.814462   68901 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44807
	I0528 21:41:32.814992   68901 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:41:32.815553   68901 main.go:141] libmachine: Using API Version  1
	I0528 21:41:32.815576   68901 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:41:32.815873   68901 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:41:32.818190   68901 out.go:177] * Stopping node "no-preload-290122"  ...
	I0528 21:41:32.819467   68901 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0528 21:41:32.819516   68901 main.go:141] libmachine: (no-preload-290122) Calling .DriverName
	I0528 21:41:32.819744   68901 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0528 21:41:32.819769   68901 main.go:141] libmachine: (no-preload-290122) Calling .GetSSHHostname
	I0528 21:41:32.822858   68901 main.go:141] libmachine: (no-preload-290122) DBG | domain no-preload-290122 has defined MAC address 52:54:00:53:d8:0b in network mk-no-preload-290122
	I0528 21:41:32.823276   68901 main.go:141] libmachine: (no-preload-290122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:d8:0b", ip: ""} in network mk-no-preload-290122: {Iface:virbr2 ExpiryTime:2024-05-28 22:39:46 +0000 UTC Type:0 Mac:52:54:00:53:d8:0b Iaid: IPaddr:192.168.50.138 Prefix:24 Hostname:no-preload-290122 Clientid:01:52:54:00:53:d8:0b}
	I0528 21:41:32.823326   68901 main.go:141] libmachine: (no-preload-290122) DBG | domain no-preload-290122 has defined IP address 192.168.50.138 and MAC address 52:54:00:53:d8:0b in network mk-no-preload-290122
	I0528 21:41:32.823437   68901 main.go:141] libmachine: (no-preload-290122) Calling .GetSSHPort
	I0528 21:41:32.823598   68901 main.go:141] libmachine: (no-preload-290122) Calling .GetSSHKeyPath
	I0528 21:41:32.823755   68901 main.go:141] libmachine: (no-preload-290122) Calling .GetSSHUsername
	I0528 21:41:32.823904   68901 sshutil.go:53] new ssh client: &{IP:192.168.50.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/no-preload-290122/id_rsa Username:docker}
	I0528 21:41:32.912191   68901 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0528 21:41:32.972545   68901 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0528 21:41:33.040364   68901 main.go:141] libmachine: Stopping "no-preload-290122"...
	I0528 21:41:33.040410   68901 main.go:141] libmachine: (no-preload-290122) Calling .GetState
	I0528 21:41:33.041988   68901 main.go:141] libmachine: (no-preload-290122) Calling .Stop
	I0528 21:41:33.045454   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 0/120
	I0528 21:41:34.046852   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 1/120
	I0528 21:41:35.048258   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 2/120
	I0528 21:41:36.049535   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 3/120
	I0528 21:41:37.050718   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 4/120
	I0528 21:41:38.052671   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 5/120
	I0528 21:41:39.054278   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 6/120
	I0528 21:41:40.056251   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 7/120
	I0528 21:41:41.058090   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 8/120
	I0528 21:41:42.059827   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 9/120
	I0528 21:41:43.062075   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 10/120
	I0528 21:41:44.064378   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 11/120
	I0528 21:41:45.065803   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 12/120
	I0528 21:41:46.067250   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 13/120
	I0528 21:41:47.068719   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 14/120
	I0528 21:41:48.070736   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 15/120
	I0528 21:41:49.071974   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 16/120
	I0528 21:41:50.073300   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 17/120
	I0528 21:41:51.074677   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 18/120
	I0528 21:41:52.076169   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 19/120
	I0528 21:41:53.078548   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 20/120
	I0528 21:41:54.080657   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 21/120
	I0528 21:41:55.082236   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 22/120
	I0528 21:41:56.083716   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 23/120
	I0528 21:41:57.085265   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 24/120
	I0528 21:41:58.087508   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 25/120
	I0528 21:41:59.089019   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 26/120
	I0528 21:42:00.090415   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 27/120
	I0528 21:42:01.091760   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 28/120
	I0528 21:42:02.093184   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 29/120
	I0528 21:42:03.095354   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 30/120
	I0528 21:42:04.096875   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 31/120
	I0528 21:42:05.098358   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 32/120
	I0528 21:42:06.099907   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 33/120
	I0528 21:42:07.101258   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 34/120
	I0528 21:42:08.103157   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 35/120
	I0528 21:42:09.104640   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 36/120
	I0528 21:42:10.106160   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 37/120
	I0528 21:42:11.107478   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 38/120
	I0528 21:42:12.108995   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 39/120
	I0528 21:42:13.111296   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 40/120
	I0528 21:42:14.112826   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 41/120
	I0528 21:42:15.114335   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 42/120
	I0528 21:42:16.115701   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 43/120
	I0528 21:42:17.117041   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 44/120
	I0528 21:42:18.118963   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 45/120
	I0528 21:42:19.120382   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 46/120
	I0528 21:42:20.121834   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 47/120
	I0528 21:42:21.123192   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 48/120
	I0528 21:42:22.124561   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 49/120
	I0528 21:42:23.126971   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 50/120
	I0528 21:42:24.128423   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 51/120
	I0528 21:42:25.129936   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 52/120
	I0528 21:42:26.132204   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 53/120
	I0528 21:42:27.133668   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 54/120
	I0528 21:42:28.135930   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 55/120
	I0528 21:42:29.137285   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 56/120
	I0528 21:42:30.139202   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 57/120
	I0528 21:42:31.140710   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 58/120
	I0528 21:42:32.142235   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 59/120
	I0528 21:42:33.144508   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 60/120
	I0528 21:42:34.145897   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 61/120
	I0528 21:42:35.147211   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 62/120
	I0528 21:42:36.148495   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 63/120
	I0528 21:42:37.149794   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 64/120
	I0528 21:42:38.151765   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 65/120
	I0528 21:42:39.153499   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 66/120
	I0528 21:42:40.154948   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 67/120
	I0528 21:42:41.156256   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 68/120
	I0528 21:42:42.157579   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 69/120
	I0528 21:42:43.159673   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 70/120
	I0528 21:42:44.161005   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 71/120
	I0528 21:42:45.162491   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 72/120
	I0528 21:42:46.163859   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 73/120
	I0528 21:42:47.165330   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 74/120
	I0528 21:42:48.167289   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 75/120
	I0528 21:42:49.169045   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 76/120
	I0528 21:42:50.170774   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 77/120
	I0528 21:42:51.172064   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 78/120
	I0528 21:42:52.173466   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 79/120
	I0528 21:42:53.175726   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 80/120
	I0528 21:42:54.177048   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 81/120
	I0528 21:42:55.178335   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 82/120
	I0528 21:42:56.179592   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 83/120
	I0528 21:42:57.180944   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 84/120
	I0528 21:42:58.182637   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 85/120
	I0528 21:42:59.183960   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 86/120
	I0528 21:43:00.185261   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 87/120
	I0528 21:43:01.186602   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 88/120
	I0528 21:43:02.187843   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 89/120
	I0528 21:43:03.189942   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 90/120
	I0528 21:43:04.192150   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 91/120
	I0528 21:43:05.193587   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 92/120
	I0528 21:43:06.194955   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 93/120
	I0528 21:43:07.196264   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 94/120
	I0528 21:43:08.198269   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 95/120
	I0528 21:43:09.199602   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 96/120
	I0528 21:43:10.201033   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 97/120
	I0528 21:43:11.202452   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 98/120
	I0528 21:43:12.203806   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 99/120
	I0528 21:43:13.206165   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 100/120
	I0528 21:43:14.207614   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 101/120
	I0528 21:43:15.209158   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 102/120
	I0528 21:43:16.210367   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 103/120
	I0528 21:43:17.211704   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 104/120
	I0528 21:43:18.213835   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 105/120
	I0528 21:43:19.215330   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 106/120
	I0528 21:43:20.216746   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 107/120
	I0528 21:43:21.218154   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 108/120
	I0528 21:43:22.219743   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 109/120
	I0528 21:43:23.222311   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 110/120
	I0528 21:43:24.224392   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 111/120
	I0528 21:43:25.225840   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 112/120
	I0528 21:43:26.227583   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 113/120
	I0528 21:43:27.228948   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 114/120
	I0528 21:43:28.230825   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 115/120
	I0528 21:43:29.232204   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 116/120
	I0528 21:43:30.233595   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 117/120
	I0528 21:43:31.235015   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 118/120
	I0528 21:43:32.236199   68901 main.go:141] libmachine: (no-preload-290122) Waiting for machine to stop 119/120
	I0528 21:43:33.237424   68901 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0528 21:43:33.237467   68901 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0528 21:43:33.239322   68901 out.go:177] 
	W0528 21:43:33.240563   68901 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0528 21:43:33.240578   68901 out.go:239] * 
	* 
	W0528 21:43:33.243160   68901 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0528 21:43:33.244370   68901 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-290122 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-290122 -n no-preload-290122
E0528 21:43:40.934199   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/flannel-110727/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-290122 -n no-preload-290122: exit status 3 (18.556376312s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0528 21:43:51.802079   69607 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.138:22: connect: no route to host
	E0528 21:43:51.802099   69607 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.138:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-290122" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.05s)
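This stop failure is a timeout rather than a crash: libmachine issues the stop and then polls "Waiting for machine to stop N/120" for the full two minutes before giving up with GUEST_STOP_TIMEOUT, leaving the VM running. A manual follow-up sketch, assuming the libvirt domain carries the profile name (the DHCP-lease lines above show the domain as no-preload-290122) and the qemu:///system connection used elsewhere in this run; this is not part of the recorded test:

    # check whether the domain is still up, then force it off if a graceful shutdown keeps hanging
    virsh -c qemu:///system list --all
    virsh -c qemu:///system shutdown no-preload-290122
    virsh -c qemu:///system destroy no-preload-290122   # hard power-off, last resort

    # retry the same stop the test ran
    out/minikube-linux-amd64 stop -p no-preload-290122 --alsologtostderr -v=3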

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (139.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-595279 --alsologtostderr -v=3
E0528 21:41:46.371997   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/calico-110727/client.crt: no such file or directory
E0528 21:41:55.337608   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/custom-flannel-110727/client.crt: no such file or directory
E0528 21:41:55.342872   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/custom-flannel-110727/client.crt: no such file or directory
E0528 21:41:55.353112   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/custom-flannel-110727/client.crt: no such file or directory
E0528 21:41:55.373397   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/custom-flannel-110727/client.crt: no such file or directory
E0528 21:41:55.413736   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/custom-flannel-110727/client.crt: no such file or directory
E0528 21:41:55.494073   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/custom-flannel-110727/client.crt: no such file or directory
E0528 21:41:55.654481   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/custom-flannel-110727/client.crt: no such file or directory
E0528 21:41:55.974952   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/custom-flannel-110727/client.crt: no such file or directory
E0528 21:41:56.612373   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/calico-110727/client.crt: no such file or directory
E0528 21:41:56.615564   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/custom-flannel-110727/client.crt: no such file or directory
E0528 21:41:57.895813   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/custom-flannel-110727/client.crt: no such file or directory
E0528 21:42:00.456745   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/custom-flannel-110727/client.crt: no such file or directory
E0528 21:42:05.577592   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/custom-flannel-110727/client.crt: no such file or directory
E0528 21:42:15.818272   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/custom-flannel-110727/client.crt: no such file or directory
E0528 21:42:16.484914   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kindnet-110727/client.crt: no such file or directory
E0528 21:42:17.092631   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/calico-110727/client.crt: no such file or directory
E0528 21:42:29.605809   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/auto-110727/client.crt: no such file or directory
E0528 21:42:36.299323   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/custom-flannel-110727/client.crt: no such file or directory
E0528 21:42:37.451299   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/functional-193928/client.crt: no such file or directory
E0528 21:42:58.052992   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/calico-110727/client.crt: no such file or directory
E0528 21:43:17.260070   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/custom-flannel-110727/client.crt: no such file or directory
E0528 21:43:20.453460   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/flannel-110727/client.crt: no such file or directory
E0528 21:43:20.458697   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/flannel-110727/client.crt: no such file or directory
E0528 21:43:20.468955   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/flannel-110727/client.crt: no such file or directory
E0528 21:43:20.489187   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/flannel-110727/client.crt: no such file or directory
E0528 21:43:20.529461   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/flannel-110727/client.crt: no such file or directory
E0528 21:43:20.609825   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/flannel-110727/client.crt: no such file or directory
E0528 21:43:20.770220   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/flannel-110727/client.crt: no such file or directory
E0528 21:43:21.090918   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/flannel-110727/client.crt: no such file or directory
E0528 21:43:21.731849   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/flannel-110727/client.crt: no such file or directory
E0528 21:43:23.012839   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/flannel-110727/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-595279 --alsologtostderr -v=3: exit status 82 (2m0.494283061s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-595279"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0528 21:41:41.926506   69002 out.go:291] Setting OutFile to fd 1 ...
	I0528 21:41:41.926771   69002 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:41:41.926782   69002 out.go:304] Setting ErrFile to fd 2...
	I0528 21:41:41.926786   69002 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:41:41.926983   69002 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
	I0528 21:41:41.927190   69002 out.go:298] Setting JSON to false
	I0528 21:41:41.927273   69002 mustload.go:65] Loading cluster: embed-certs-595279
	I0528 21:41:41.927588   69002 config.go:182] Loaded profile config "embed-certs-595279": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 21:41:41.927648   69002 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/embed-certs-595279/config.json ...
	I0528 21:41:41.927816   69002 mustload.go:65] Loading cluster: embed-certs-595279
	I0528 21:41:41.927917   69002 config.go:182] Loaded profile config "embed-certs-595279": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 21:41:41.927939   69002 stop.go:39] StopHost: embed-certs-595279
	I0528 21:41:41.928304   69002 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:41:41.928353   69002 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:41:41.943521   69002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33503
	I0528 21:41:41.943964   69002 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:41:41.944500   69002 main.go:141] libmachine: Using API Version  1
	I0528 21:41:41.944522   69002 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:41:41.944882   69002 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:41:41.947111   69002 out.go:177] * Stopping node "embed-certs-595279"  ...
	I0528 21:41:41.948692   69002 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0528 21:41:41.948726   69002 main.go:141] libmachine: (embed-certs-595279) Calling .DriverName
	I0528 21:41:41.948935   69002 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0528 21:41:41.948957   69002 main.go:141] libmachine: (embed-certs-595279) Calling .GetSSHHostname
	I0528 21:41:41.951706   69002 main.go:141] libmachine: (embed-certs-595279) DBG | domain embed-certs-595279 has defined MAC address 52:54:00:13:28:f8 in network mk-embed-certs-595279
	I0528 21:41:41.952149   69002 main.go:141] libmachine: (embed-certs-595279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:28:f8", ip: ""} in network mk-embed-certs-595279: {Iface:virbr3 ExpiryTime:2024-05-28 22:40:09 +0000 UTC Type:0 Mac:52:54:00:13:28:f8 Iaid: IPaddr:192.168.61.79 Prefix:24 Hostname:embed-certs-595279 Clientid:01:52:54:00:13:28:f8}
	I0528 21:41:41.952176   69002 main.go:141] libmachine: (embed-certs-595279) DBG | domain embed-certs-595279 has defined IP address 192.168.61.79 and MAC address 52:54:00:13:28:f8 in network mk-embed-certs-595279
	I0528 21:41:41.952280   69002 main.go:141] libmachine: (embed-certs-595279) Calling .GetSSHPort
	I0528 21:41:41.952490   69002 main.go:141] libmachine: (embed-certs-595279) Calling .GetSSHKeyPath
	I0528 21:41:41.952664   69002 main.go:141] libmachine: (embed-certs-595279) Calling .GetSSHUsername
	I0528 21:41:41.952800   69002 sshutil.go:53] new ssh client: &{IP:192.168.61.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/embed-certs-595279/id_rsa Username:docker}
	I0528 21:41:42.059048   69002 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0528 21:41:42.125960   69002 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0528 21:41:42.180964   69002 main.go:141] libmachine: Stopping "embed-certs-595279"...
	I0528 21:41:42.181005   69002 main.go:141] libmachine: (embed-certs-595279) Calling .GetState
	I0528 21:41:42.182404   69002 main.go:141] libmachine: (embed-certs-595279) Calling .Stop
	I0528 21:41:42.185957   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 0/120
	I0528 21:41:43.187410   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 1/120
	I0528 21:41:44.188877   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 2/120
	I0528 21:41:45.190166   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 3/120
	I0528 21:41:46.192312   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 4/120
	I0528 21:41:47.194437   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 5/120
	I0528 21:41:48.195951   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 6/120
	I0528 21:41:49.197474   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 7/120
	I0528 21:41:50.198763   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 8/120
	I0528 21:41:51.200112   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 9/120
	I0528 21:41:52.202380   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 10/120
	I0528 21:41:53.204171   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 11/120
	I0528 21:41:54.205541   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 12/120
	I0528 21:41:55.207183   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 13/120
	I0528 21:41:56.208585   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 14/120
	I0528 21:41:57.210697   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 15/120
	I0528 21:41:58.212198   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 16/120
	I0528 21:41:59.213805   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 17/120
	I0528 21:42:00.215032   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 18/120
	I0528 21:42:01.216361   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 19/120
	I0528 21:42:02.218804   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 20/120
	I0528 21:42:03.220123   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 21/120
	I0528 21:42:04.221509   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 22/120
	I0528 21:42:05.222821   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 23/120
	I0528 21:42:06.224449   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 24/120
	I0528 21:42:07.226400   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 25/120
	I0528 21:42:08.227643   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 26/120
	I0528 21:42:09.228906   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 27/120
	I0528 21:42:10.230370   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 28/120
	I0528 21:42:11.231643   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 29/120
	I0528 21:42:12.233857   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 30/120
	I0528 21:42:13.235194   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 31/120
	I0528 21:42:14.236488   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 32/120
	I0528 21:42:15.238076   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 33/120
	I0528 21:42:16.240213   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 34/120
	I0528 21:42:17.242059   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 35/120
	I0528 21:42:18.244341   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 36/120
	I0528 21:42:19.245674   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 37/120
	I0528 21:42:20.247033   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 38/120
	I0528 21:42:21.248385   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 39/120
	I0528 21:42:22.250398   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 40/120
	I0528 21:42:23.251828   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 41/120
	I0528 21:42:24.253092   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 42/120
	I0528 21:42:25.254623   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 43/120
	I0528 21:42:26.255835   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 44/120
	I0528 21:42:27.257730   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 45/120
	I0528 21:42:28.259104   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 46/120
	I0528 21:42:29.260271   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 47/120
	I0528 21:42:30.261530   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 48/120
	I0528 21:42:31.262859   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 49/120
	I0528 21:42:32.265018   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 50/120
	I0528 21:42:33.266305   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 51/120
	I0528 21:42:34.267479   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 52/120
	I0528 21:42:35.268694   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 53/120
	I0528 21:42:36.270045   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 54/120
	I0528 21:42:37.271861   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 55/120
	I0528 21:42:38.273085   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 56/120
	I0528 21:42:39.274354   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 57/120
	I0528 21:42:40.275670   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 58/120
	I0528 21:42:41.277085   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 59/120
	I0528 21:42:42.279099   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 60/120
	I0528 21:42:43.280475   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 61/120
	I0528 21:42:44.281792   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 62/120
	I0528 21:42:45.283470   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 63/120
	I0528 21:42:46.284760   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 64/120
	I0528 21:42:47.287006   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 65/120
	I0528 21:42:48.288977   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 66/120
	I0528 21:42:49.290510   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 67/120
	I0528 21:42:50.292413   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 68/120
	I0528 21:42:51.293820   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 69/120
	I0528 21:42:52.295947   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 70/120
	I0528 21:42:53.297270   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 71/120
	I0528 21:42:54.299083   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 72/120
	I0528 21:42:55.300305   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 73/120
	I0528 21:42:56.301577   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 74/120
	I0528 21:42:57.303801   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 75/120
	I0528 21:42:58.304984   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 76/120
	I0528 21:42:59.306310   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 77/120
	I0528 21:43:00.307558   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 78/120
	I0528 21:43:01.308725   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 79/120
	I0528 21:43:02.310880   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 80/120
	I0528 21:43:03.312138   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 81/120
	I0528 21:43:04.313709   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 82/120
	I0528 21:43:05.314961   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 83/120
	I0528 21:43:06.316226   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 84/120
	I0528 21:43:07.318383   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 85/120
	I0528 21:43:08.319832   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 86/120
	I0528 21:43:09.321809   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 87/120
	I0528 21:43:10.323082   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 88/120
	I0528 21:43:11.324480   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 89/120
	I0528 21:43:12.326602   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 90/120
	I0528 21:43:13.328207   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 91/120
	I0528 21:43:14.329603   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 92/120
	I0528 21:43:15.330873   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 93/120
	I0528 21:43:16.331982   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 94/120
	I0528 21:43:17.333778   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 95/120
	I0528 21:43:18.335273   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 96/120
	I0528 21:43:19.336563   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 97/120
	I0528 21:43:20.338017   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 98/120
	I0528 21:43:21.340178   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 99/120
	I0528 21:43:22.342258   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 100/120
	I0528 21:43:23.343685   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 101/120
	I0528 21:43:24.344995   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 102/120
	I0528 21:43:25.346886   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 103/120
	I0528 21:43:26.348177   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 104/120
	I0528 21:43:27.350218   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 105/120
	I0528 21:43:28.351366   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 106/120
	I0528 21:43:29.352713   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 107/120
	I0528 21:43:30.354276   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 108/120
	I0528 21:43:31.356236   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 109/120
	I0528 21:43:32.358104   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 110/120
	I0528 21:43:33.359956   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 111/120
	I0528 21:43:34.361278   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 112/120
	I0528 21:43:35.362491   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 113/120
	I0528 21:43:36.364415   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 114/120
	I0528 21:43:37.366444   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 115/120
	I0528 21:43:38.367653   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 116/120
	I0528 21:43:39.368959   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 117/120
	I0528 21:43:40.370261   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 118/120
	I0528 21:43:41.371545   69002 main.go:141] libmachine: (embed-certs-595279) Waiting for machine to stop 119/120
	I0528 21:43:42.372921   69002 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0528 21:43:42.372985   69002 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0528 21:43:42.374909   69002 out.go:177] 
	W0528 21:43:42.376296   69002 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0528 21:43:42.376312   69002 out.go:239] * 
	* 
	W0528 21:43:42.378847   69002 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0528 21:43:42.380237   69002 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-595279 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-595279 -n embed-certs-595279
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-595279 -n embed-certs-595279: exit status 3 (18.635436996s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0528 21:44:01.018059   69669 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.79:22: connect: no route to host
	E0528 21:44:01.018079   69669 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.79:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-595279" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.13s)
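Note: the stderr above shows the stop path backing up /etc/cni and /etc/kubernetes and then polling the VM once per second, "Waiting for machine to stop 0/120" through 119/120, before giving up with GUEST_STOP_TIMEOUT. The Go sketch below reproduces only that bounded-poll shape under a hypothetical getState callback; it is not minikube's stop implementation:

    // waitstop.go: a sketch of the bounded "Waiting for machine to stop i/N"
    // loop visible in the stderr above. getState is a hypothetical stand-in
    // for whatever reports the VM state.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitForStop polls getState once per second, up to attempts times, and
    // returns an error if the machine never leaves the "Running" state.
    func waitForStop(getState func() (string, error), attempts int) error {
        for i := 0; i < attempts; i++ {
            state, err := getState()
            if err != nil {
                return err
            }
            if state != "Running" {
                return nil // stopped, or at least no longer running
            }
            fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
            time.Sleep(time.Second)
        }
        return errors.New(`unable to stop vm, current state "Running"`)
    }

    func main() {
        // Fake state source that never stops, mirroring the failure in the log.
        // The real run above used 120 attempts; 3 keeps this demo short.
        stuck := func() (string, error) { return "Running", nil }
        if err := waitForStop(stuck, 3); err != nil {
            fmt.Println("stop err:", err)
        }
    }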

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.46s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-499466 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-499466 create -f testdata/busybox.yaml: exit status 1 (41.012125ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-499466" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-499466 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-499466 -n old-k8s-version-499466
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-499466 -n old-k8s-version-499466: exit status 6 (207.761983ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0528 21:43:26.001176   69500 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-499466" does not appear in /home/jenkins/minikube-integration/18966-3963/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-499466" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-499466 -n old-k8s-version-499466
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-499466 -n old-k8s-version-499466: exit status 6 (208.211867ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0528 21:43:26.209460   69531 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-499466" does not appear in /home/jenkins/minikube-integration/18966-3963/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-499466" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.46s)
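Note: both post-mortem status calls above report that "old-k8s-version-499466" does not appear in the kubeconfig, so the DeployApp failure is simply a missing kubectl context. A small, hypothetical pre-flight check along these lines, using the standard `kubectl config get-contexts -o name`, would surface that before the create is attempted:

    // contextcheck.go: a sketch of a pre-flight check for the failure above,
    // where `kubectl --context old-k8s-version-499466 create ...` fails because
    // the context is missing from the kubeconfig. Not part of the test suite.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hasContext reports whether name appears in `kubectl config get-contexts -o name`.
    func hasContext(name string) (bool, error) {
        out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
        if err != nil {
            return false, err
        }
        for _, ctx := range strings.Fields(string(out)) {
            if ctx == name {
                return true, nil
            }
        }
        return false, nil
    }

    func main() {
        ok, err := hasContext("old-k8s-version-499466")
        if err != nil {
            fmt.Println("kubectl not available:", err)
            return
        }
        if !ok {
            fmt.Println(`context "old-k8s-version-499466" does not exist; fix the kubeconfig (e.g. minikube update-context) before deploying`)
            return
        }
        // Only now: kubectl --context old-k8s-version-499466 create -f testdata/busybox.yaml
    }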

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (98.53s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-499466 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0528 21:43:30.693993   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/flannel-110727/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-499466 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m38.276618342s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-499466 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-499466 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-499466 describe deploy/metrics-server -n kube-system: exit status 1 (43.70209ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-499466" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-499466 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-499466 -n old-k8s-version-499466
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-499466 -n old-k8s-version-499466: exit status 6 (212.395102ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0528 21:45:04.742097   70263 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-499466" does not appear in /home/jenkins/minikube-integration/18966-3963/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-499466" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (98.53s)
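Note: two separate things fail above: the in-VM kubectl apply of the metrics-server manifests cannot reach the apiserver ("The connection to the server localhost:8443 was refused"), and the follow-up image check cannot even run because the context is gone. For the latter, the assertion at start_stop_delete_test.go:221 amounts to a string-containment check over `kubectl describe deploy/metrics-server -n kube-system`; the sketch below mirrors that idea with a made-up helper name and is not the test's actual code:

    // imagecheck.go: a sketch of the "addon did not load correct image" check
    // reported above. It shells out to the same describe command the test uses
    // and looks for the expected image string.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // deployUsesImage reports whether the metrics-server deployment description
    // in the given kubectl context mentions the expected image reference.
    func deployUsesImage(context, expectedImage string) (bool, error) {
        out, err := exec.Command("kubectl", "--context", context,
            "describe", "deploy/metrics-server", "-n", "kube-system").CombinedOutput()
        if err != nil {
            // e.g. `error: context "old-k8s-version-499466" does not exist`
            return false, fmt.Errorf("describe failed: %v\n%s", err, out)
        }
        return strings.Contains(string(out), expectedImage), nil
    }

    func main() {
        ok, err := deployUsesImage("old-k8s-version-499466",
            "fake.domain/registry.k8s.io/echoserver:1.4")
        switch {
        case err != nil:
            fmt.Println(err)
        case !ok:
            fmt.Println("addon did not load correct image")
        default:
            fmt.Println("metrics-server is using the overridden image")
        }
    }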

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.39s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-290122 -n no-preload-290122
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-290122 -n no-preload-290122: exit status 3 (3.167942724s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0528 21:43:54.970094   69740 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.138:22: connect: no route to host
	E0528 21:43:54.970123   69740 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.138:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-290122 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-290122 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.15589316s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.138:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-290122 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-290122 -n no-preload-290122
E0528 21:44:01.415030   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/flannel-110727/client.crt: no such file or directory
E0528 21:44:04.182234   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/bridge-110727/client.crt: no such file or directory
E0528 21:44:04.188351   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/bridge-110727/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-290122 -n no-preload-290122: exit status 3 (3.063862194s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0528 21:44:04.190055   69850 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.138:22: connect: no route to host
	E0528 21:44:04.190075   69850 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.138:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-290122" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.39s)
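Note: EnableAddonAfterStop first asserts that the host status reads "Stopped", but here it keeps reading "Error" because SSH to 192.168.50.138 has no route to host, and the later `addons enable dashboard` fails for the same reason. Purely as an illustration, the sketch below gates the enable step on the status actually reaching "Stopped" within a deadline; the status callback is a stand-in, not minikube code:

    // waitstopped.go: a sketch of gating the "addons enable" step on the host
    // actually reporting "Stopped", since the log above shows it still reads
    // "Error" while the VM is unreachable. Helper names are hypothetical.
    package main

    import (
        "fmt"
        "time"
    )

    // waitForState polls status until it returns want or the deadline passes.
    func waitForState(status func() string, want string, deadline time.Duration) bool {
        end := time.Now().Add(deadline)
        for time.Now().Before(end) {
            if status() == want {
                return true
            }
            time.Sleep(2 * time.Second)
        }
        return false
    }

    func main() {
        // Fake status source mirroring the failure: the host never reaches
        // "Stopped" and keeps reporting "Error".
        status := func() string { return "Error" }
        if !waitForState(status, "Stopped", 10*time.Second) {
            fmt.Println(`expected post-stop host status to be "Stopped" but got "Error"; skipping addons enable`)
            return
        }
        // Only now: out/minikube-linux-amd64 addons enable dashboard -p no-preload-290122 ...
    }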

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-595279 -n embed-certs-595279
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-595279 -n embed-certs-595279: exit status 3 (3.167792601s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0528 21:44:04.186049   69820 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.79:22: connect: no route to host
	E0528 21:44:04.186064   69820 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.79:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-595279 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-595279 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.1516991s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.79:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-595279 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-595279 -n embed-certs-595279
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-595279 -n embed-certs-595279: exit status 3 (3.064123657s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0528 21:44:13.402153   69956 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.79:22: connect: no route to host
	E0528 21:44:13.402171   69956 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.79:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-595279" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (737.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-499466 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0528 21:45:13.445975   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/auto-110727/client.crt: no such file or directory
E0528 21:45:26.108453   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/bridge-110727/client.crt: no such file or directory
E0528 21:45:46.972995   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/enable-default-cni-110727/client.crt: no such file or directory
E0528 21:46:04.296523   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/flannel-110727/client.crt: no such file or directory
E0528 21:46:36.131819   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/calico-110727/client.crt: no such file or directory
E0528 21:46:48.028766   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/bridge-110727/client.crt: no such file or directory
E0528 21:46:55.337887   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/custom-flannel-110727/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-499466 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m15.898077845s)

                                                
                                                
-- stdout --
	* [old-k8s-version-499466] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18966-3963/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-3963/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-499466" primary control-plane node in "old-k8s-version-499466" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-499466" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0528 21:45:09.511734   70393 out.go:291] Setting OutFile to fd 1 ...
	I0528 21:45:09.512015   70393 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:45:09.512024   70393 out.go:304] Setting ErrFile to fd 2...
	I0528 21:45:09.512029   70393 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:45:09.512230   70393 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
	I0528 21:45:09.512722   70393 out.go:298] Setting JSON to false
	I0528 21:45:09.513628   70393 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5252,"bootTime":1716927457,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0528 21:45:09.513688   70393 start.go:139] virtualization: kvm guest
	I0528 21:45:09.515710   70393 out.go:177] * [old-k8s-version-499466] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0528 21:45:09.516851   70393 notify.go:220] Checking for updates...
	I0528 21:45:09.516855   70393 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 21:45:09.518143   70393 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 21:45:09.519313   70393 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 21:45:09.520458   70393 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 21:45:09.521564   70393 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0528 21:45:09.522750   70393 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 21:45:09.524143   70393 config.go:182] Loaded profile config "old-k8s-version-499466": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0528 21:45:09.524521   70393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:45:09.524564   70393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:45:09.538978   70393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40875
	I0528 21:45:09.539311   70393 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:45:09.539762   70393 main.go:141] libmachine: Using API Version  1
	I0528 21:45:09.539785   70393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:45:09.540071   70393 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:45:09.540270   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .DriverName
	I0528 21:45:09.541692   70393 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0528 21:45:09.542685   70393 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 21:45:09.542974   70393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:45:09.543016   70393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:45:09.556837   70393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41363
	I0528 21:45:09.557242   70393 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:45:09.557708   70393 main.go:141] libmachine: Using API Version  1
	I0528 21:45:09.557733   70393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:45:09.558014   70393 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:45:09.558272   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .DriverName
	I0528 21:45:09.591821   70393 out.go:177] * Using the kvm2 driver based on existing profile
	I0528 21:45:09.593187   70393 start.go:297] selected driver: kvm2
	I0528 21:45:09.593202   70393 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-499466 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-499466 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.8 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:45:09.593310   70393 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0528 21:45:09.594048   70393 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 21:45:09.594116   70393 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18966-3963/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0528 21:45:09.608513   70393 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0528 21:45:09.608837   70393 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 21:45:09.608887   70393 cni.go:84] Creating CNI manager for ""
	I0528 21:45:09.608900   70393 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 21:45:09.608947   70393 start.go:340] cluster config:
	{Name:old-k8s-version-499466 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-499466 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.8 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:45:09.609033   70393 iso.go:125] acquiring lock: {Name:mk0ef48bd19f4019c3ee87391d15b9afc1d5e8d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 21:45:09.610774   70393 out.go:177] * Starting "old-k8s-version-499466" primary control-plane node in "old-k8s-version-499466" cluster
	I0528 21:45:09.611958   70393 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0528 21:45:09.611994   70393 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0528 21:45:09.612009   70393 cache.go:56] Caching tarball of preloaded images
	I0528 21:45:09.612080   70393 preload.go:173] Found /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0528 21:45:09.612090   70393 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0528 21:45:09.612179   70393 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/old-k8s-version-499466/config.json ...
	I0528 21:45:09.612358   70393 start.go:360] acquireMachinesLock for old-k8s-version-499466: {Name:mk6fb3103b370c7f3d9923fd9de7afe2c77239d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0528 21:49:00.318885   70393 start.go:364] duration metric: took 3m50.706496372s to acquireMachinesLock for "old-k8s-version-499466"
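	(The 3m50s gap between the two timestamps above is spent entirely inside acquireMachinesLock, which, per its own log entry, polls with a 500ms delay until a 13m timeout. A minimal Go sketch of that kind of time-bounded, polling lock acquisition; the TryLock-based locker here is an assumption for illustration, not minikube's actual implementation.)

	package main

	import (
		"errors"
		"fmt"
		"sync"
		"time"
	)

	// acquireWithTimeout polls a TryLock-style mutex every delay until it
	// succeeds or the timeout elapses, mirroring the Delay/Timeout fields
	// shown in the acquireMachinesLock log line.
	func acquireWithTimeout(mu *sync.Mutex, delay, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if mu.TryLock() {
				return nil
			}
			if time.Now().After(deadline) {
				return errors.New("timed out waiting for machines lock")
			}
			time.Sleep(delay)
		}
	}

	func main() {
		var mu sync.Mutex
		start := time.Now()
		if err := acquireWithTimeout(&mu, 500*time.Millisecond, 13*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		defer mu.Unlock()
		fmt.Printf("took %s to acquire lock\n", time.Since(start))
	}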
	I0528 21:49:00.318951   70393 start.go:96] Skipping create...Using existing machine configuration
	I0528 21:49:00.318974   70393 fix.go:54] fixHost starting: 
	I0528 21:49:00.319345   70393 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:49:00.319377   70393 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:49:00.335827   70393 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40851
	I0528 21:49:00.336165   70393 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:49:00.336583   70393 main.go:141] libmachine: Using API Version  1
	I0528 21:49:00.336608   70393 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:49:00.336895   70393 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:49:00.337075   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .DriverName
	I0528 21:49:00.337212   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetState
	I0528 21:49:00.338384   70393 fix.go:112] recreateIfNeeded on old-k8s-version-499466: state=Stopped err=<nil>
	I0528 21:49:00.338418   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .DriverName
	W0528 21:49:00.338566   70393 fix.go:138] unexpected machine state, will restart: <nil>
	I0528 21:49:00.340994   70393 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-499466" ...
	I0528 21:49:00.342632   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .Start
	I0528 21:49:00.342765   70393 main.go:141] libmachine: (old-k8s-version-499466) Ensuring networks are active...
	I0528 21:49:00.343330   70393 main.go:141] libmachine: (old-k8s-version-499466) Ensuring network default is active
	I0528 21:49:00.343629   70393 main.go:141] libmachine: (old-k8s-version-499466) Ensuring network mk-old-k8s-version-499466 is active
	I0528 21:49:00.343943   70393 main.go:141] libmachine: (old-k8s-version-499466) Getting domain xml...
	I0528 21:49:00.344524   70393 main.go:141] libmachine: (old-k8s-version-499466) Creating domain...
	I0528 21:49:01.617557   70393 main.go:141] libmachine: (old-k8s-version-499466) Waiting to get IP...
	I0528 21:49:01.618628   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:49:01.619030   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | unable to find current IP address of domain old-k8s-version-499466 in network mk-old-k8s-version-499466
	I0528 21:49:01.619067   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | I0528 21:49:01.619003   71515 retry.go:31] will retry after 235.861567ms: waiting for machine to come up
	I0528 21:49:01.856598   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:49:01.857024   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | unable to find current IP address of domain old-k8s-version-499466 in network mk-old-k8s-version-499466
	I0528 21:49:01.857051   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | I0528 21:49:01.856996   71515 retry.go:31] will retry after 383.755883ms: waiting for machine to come up
	I0528 21:49:02.242787   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:49:02.243213   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | unable to find current IP address of domain old-k8s-version-499466 in network mk-old-k8s-version-499466
	I0528 21:49:02.243253   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | I0528 21:49:02.243180   71515 retry.go:31] will retry after 459.795306ms: waiting for machine to come up
	I0528 21:49:02.704904   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:49:02.705385   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | unable to find current IP address of domain old-k8s-version-499466 in network mk-old-k8s-version-499466
	I0528 21:49:02.705414   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | I0528 21:49:02.705341   71515 retry.go:31] will retry after 500.689566ms: waiting for machine to come up
	I0528 21:49:03.207898   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:49:03.208459   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | unable to find current IP address of domain old-k8s-version-499466 in network mk-old-k8s-version-499466
	I0528 21:49:03.208492   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | I0528 21:49:03.208390   71515 retry.go:31] will retry after 526.795373ms: waiting for machine to come up
	I0528 21:49:03.737303   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:49:03.737912   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | unable to find current IP address of domain old-k8s-version-499466 in network mk-old-k8s-version-499466
	I0528 21:49:03.737935   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | I0528 21:49:03.737835   71515 retry.go:31] will retry after 798.751431ms: waiting for machine to come up
	I0528 21:49:04.537841   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:49:04.538347   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | unable to find current IP address of domain old-k8s-version-499466 in network mk-old-k8s-version-499466
	I0528 21:49:04.538376   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | I0528 21:49:04.538297   71515 retry.go:31] will retry after 824.175848ms: waiting for machine to come up
	I0528 21:49:05.364585   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:49:05.365149   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | unable to find current IP address of domain old-k8s-version-499466 in network mk-old-k8s-version-499466
	I0528 21:49:05.365181   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | I0528 21:49:05.365108   71515 retry.go:31] will retry after 1.08837711s: waiting for machine to come up
	I0528 21:49:06.454944   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:49:06.455392   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | unable to find current IP address of domain old-k8s-version-499466 in network mk-old-k8s-version-499466
	I0528 21:49:06.455416   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | I0528 21:49:06.455358   71515 retry.go:31] will retry after 1.390917851s: waiting for machine to come up
	I0528 21:49:07.847768   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:49:07.848199   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | unable to find current IP address of domain old-k8s-version-499466 in network mk-old-k8s-version-499466
	I0528 21:49:07.848227   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | I0528 21:49:07.848156   71515 retry.go:31] will retry after 1.642130184s: waiting for machine to come up
	I0528 21:49:09.491598   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:49:09.492137   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | unable to find current IP address of domain old-k8s-version-499466 in network mk-old-k8s-version-499466
	I0528 21:49:09.492168   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | I0528 21:49:09.492089   71515 retry.go:31] will retry after 1.959426009s: waiting for machine to come up
	I0528 21:49:11.453602   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:49:11.454074   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | unable to find current IP address of domain old-k8s-version-499466 in network mk-old-k8s-version-499466
	I0528 21:49:11.454100   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | I0528 21:49:11.454023   71515 retry.go:31] will retry after 2.772381437s: waiting for machine to come up
	I0528 21:49:14.227677   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:49:14.228007   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | unable to find current IP address of domain old-k8s-version-499466 in network mk-old-k8s-version-499466
	I0528 21:49:14.228039   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | I0528 21:49:14.227965   71515 retry.go:31] will retry after 2.782728525s: waiting for machine to come up
	I0528 21:49:17.012726   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:49:17.013117   70393 main.go:141] libmachine: (old-k8s-version-499466) Found IP for machine: 192.168.39.8
	I0528 21:49:17.013135   70393 main.go:141] libmachine: (old-k8s-version-499466) Reserving static IP address...
	I0528 21:49:17.013151   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has current primary IP address 192.168.39.8 and MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:49:17.013567   70393 main.go:141] libmachine: (old-k8s-version-499466) Reserved static IP address: 192.168.39.8
	I0528 21:49:17.013592   70393 main.go:141] libmachine: (old-k8s-version-499466) Waiting for SSH to be available...
	I0528 21:49:17.013619   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | found host DHCP lease matching {name: "old-k8s-version-499466", mac: "52:54:00:04:bf:9b", ip: "192.168.39.8"} in network mk-old-k8s-version-499466: {Iface:virbr1 ExpiryTime:2024-05-28 22:49:11 +0000 UTC Type:0 Mac:52:54:00:04:bf:9b Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:old-k8s-version-499466 Clientid:01:52:54:00:04:bf:9b}
	I0528 21:49:17.013641   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | skip adding static IP to network mk-old-k8s-version-499466 - found existing host DHCP lease matching {name: "old-k8s-version-499466", mac: "52:54:00:04:bf:9b", ip: "192.168.39.8"}
	I0528 21:49:17.013664   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | Getting to WaitForSSH function...
	I0528 21:49:17.015545   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:49:17.015871   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:bf:9b", ip: ""} in network mk-old-k8s-version-499466: {Iface:virbr1 ExpiryTime:2024-05-28 22:49:11 +0000 UTC Type:0 Mac:52:54:00:04:bf:9b Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:old-k8s-version-499466 Clientid:01:52:54:00:04:bf:9b}
	I0528 21:49:17.015895   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined IP address 192.168.39.8 and MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:49:17.015996   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | Using SSH client type: external
	I0528 21:49:17.016023   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | Using SSH private key: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/old-k8s-version-499466/id_rsa (-rw-------)
	I0528 21:49:17.016062   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.8 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18966-3963/.minikube/machines/old-k8s-version-499466/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0528 21:49:17.016077   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | About to run SSH command:
	I0528 21:49:17.016102   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | exit 0
	I0528 21:49:17.141599   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | SSH cmd err, output: <nil>: 
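	(The "will retry after 235ms / 383ms / ... / 2.78s" lines above, followed by the `exit 0` probe over SSH, are one pattern: keep probing with a growing, jittered delay until the VM answers or a deadline passes. A generic Go sketch of such a retry helper; the exact backoff schedule and the fake probe below are illustrative assumptions, not minikube's code.)

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryUntil calls probe with a growing, jittered delay until it returns
	// true or maxWait is exceeded - the same shape as the retry sequence in
	// the log while waiting for the machine's IP and SSH to come up.
	func retryUntil(probe func() bool, maxWait time.Duration) bool {
		deadline := time.Now().Add(maxWait)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if probe() {
				return true
			}
			jitter := time.Duration(rand.Int63n(int64(delay) / 2))
			time.Sleep(delay + jitter)
			if delay < 3*time.Second {
				delay = delay * 3 / 2 // grow roughly 1.5x per attempt
			}
		}
		return false
	}

	func main() {
		attempts := 0
		// Stand-in probe: pretend the VM reports an IP on the 5th try.
		up := retryUntil(func() bool { attempts++; return attempts >= 5 }, time.Minute)
		fmt.Println("machine came up:", up, "after", attempts, "attempts")
	}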
	I0528 21:49:17.141943   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetConfigRaw
	I0528 21:49:17.142523   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetIP
	I0528 21:49:17.145147   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:49:17.145537   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:bf:9b", ip: ""} in network mk-old-k8s-version-499466: {Iface:virbr1 ExpiryTime:2024-05-28 22:49:11 +0000 UTC Type:0 Mac:52:54:00:04:bf:9b Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:old-k8s-version-499466 Clientid:01:52:54:00:04:bf:9b}
	I0528 21:49:17.145578   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined IP address 192.168.39.8 and MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:49:17.145836   70393 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/old-k8s-version-499466/config.json ...
	I0528 21:49:17.146011   70393 machine.go:94] provisionDockerMachine start ...
	I0528 21:49:17.146027   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .DriverName
	I0528 21:49:17.146214   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHHostname
	I0528 21:49:17.148224   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:49:17.148526   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:bf:9b", ip: ""} in network mk-old-k8s-version-499466: {Iface:virbr1 ExpiryTime:2024-05-28 22:49:11 +0000 UTC Type:0 Mac:52:54:00:04:bf:9b Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:old-k8s-version-499466 Clientid:01:52:54:00:04:bf:9b}
	I0528 21:49:17.148554   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined IP address 192.168.39.8 and MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:49:17.148651   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHPort
	I0528 21:49:17.148828   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHKeyPath
	I0528 21:49:17.148980   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHKeyPath
	I0528 21:49:17.149122   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHUsername
	I0528 21:49:17.149287   70393 main.go:141] libmachine: Using SSH client type: native
	I0528 21:49:17.149489   70393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.8 22 <nil> <nil>}
	I0528 21:49:17.149500   70393 main.go:141] libmachine: About to run SSH command:
	hostname
	I0528 21:49:17.262264   70393 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0528 21:49:17.262289   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetMachineName
	I0528 21:49:17.262534   70393 buildroot.go:166] provisioning hostname "old-k8s-version-499466"
	I0528 21:49:17.262556   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetMachineName
	I0528 21:49:17.262739   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHHostname
	I0528 21:49:17.265078   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:49:17.265460   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:bf:9b", ip: ""} in network mk-old-k8s-version-499466: {Iface:virbr1 ExpiryTime:2024-05-28 22:49:11 +0000 UTC Type:0 Mac:52:54:00:04:bf:9b Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:old-k8s-version-499466 Clientid:01:52:54:00:04:bf:9b}
	I0528 21:49:17.265485   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined IP address 192.168.39.8 and MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:49:17.265572   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHPort
	I0528 21:49:17.265744   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHKeyPath
	I0528 21:49:17.265913   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHKeyPath
	I0528 21:49:17.266061   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHUsername
	I0528 21:49:17.266206   70393 main.go:141] libmachine: Using SSH client type: native
	I0528 21:49:17.266375   70393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.8 22 <nil> <nil>}
	I0528 21:49:17.266388   70393 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-499466 && echo "old-k8s-version-499466" | sudo tee /etc/hostname
	I0528 21:49:17.393037   70393 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-499466
	
	I0528 21:49:17.393071   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHHostname
	I0528 21:49:17.395749   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:49:17.396064   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:bf:9b", ip: ""} in network mk-old-k8s-version-499466: {Iface:virbr1 ExpiryTime:2024-05-28 22:49:11 +0000 UTC Type:0 Mac:52:54:00:04:bf:9b Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:old-k8s-version-499466 Clientid:01:52:54:00:04:bf:9b}
	I0528 21:49:17.396096   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined IP address 192.168.39.8 and MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:49:17.396241   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHPort
	I0528 21:49:17.396465   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHKeyPath
	I0528 21:49:17.396630   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHKeyPath
	I0528 21:49:17.396752   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHUsername
	I0528 21:49:17.396892   70393 main.go:141] libmachine: Using SSH client type: native
	I0528 21:49:17.397085   70393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.8 22 <nil> <nil>}
	I0528 21:49:17.397102   70393 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-499466' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-499466/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-499466' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 21:49:17.515583   70393 main.go:141] libmachine: SSH cmd err, output: <nil>: 
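	(The shell snippet run over SSH above only touches /etc/hosts when no entry for the hostname exists, preferring to rewrite an existing 127.0.1.1 line before appending a new one. The same idempotent logic re-expressed in Go as a sketch that operates on a string rather than the real file.)

	package main

	import (
		"fmt"
		"regexp"
		"strings"
	)

	// ensureHostsEntry mirrors the grep/sed/echo logic: do nothing if the name
	// is already mapped, rewrite an existing 127.0.1.1 line if there is one,
	// otherwise append a new 127.0.1.1 entry.
	func ensureHostsEntry(hosts, name string) string {
		if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
			return hosts // already present
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loopback.MatchString(hosts) {
			return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
		}
		return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
	}

	func main() {
		hosts := "127.0.0.1 localhost\n127.0.1.1 minikube\n"
		fmt.Print(ensureHostsEntry(hosts, "old-k8s-version-499466"))
	}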
	I0528 21:49:17.515608   70393 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18966-3963/.minikube CaCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18966-3963/.minikube}
	I0528 21:49:17.515632   70393 buildroot.go:174] setting up certificates
	I0528 21:49:17.515641   70393 provision.go:84] configureAuth start
	I0528 21:49:17.515649   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetMachineName
	I0528 21:49:17.515906   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetIP
	I0528 21:49:17.518798   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:49:17.519184   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:bf:9b", ip: ""} in network mk-old-k8s-version-499466: {Iface:virbr1 ExpiryTime:2024-05-28 22:49:11 +0000 UTC Type:0 Mac:52:54:00:04:bf:9b Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:old-k8s-version-499466 Clientid:01:52:54:00:04:bf:9b}
	I0528 21:49:17.519215   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined IP address 192.168.39.8 and MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:49:17.519365   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHHostname
	I0528 21:49:17.521859   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:49:17.522209   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:bf:9b", ip: ""} in network mk-old-k8s-version-499466: {Iface:virbr1 ExpiryTime:2024-05-28 22:49:11 +0000 UTC Type:0 Mac:52:54:00:04:bf:9b Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:old-k8s-version-499466 Clientid:01:52:54:00:04:bf:9b}
	I0528 21:49:17.522239   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined IP address 192.168.39.8 and MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:49:17.522373   70393 provision.go:143] copyHostCerts
	I0528 21:49:17.522427   70393 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem, removing ...
	I0528 21:49:17.522440   70393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem
	I0528 21:49:17.522492   70393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem (1123 bytes)
	I0528 21:49:17.522578   70393 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem, removing ...
	I0528 21:49:17.522585   70393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem
	I0528 21:49:17.522604   70393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem (1675 bytes)
	I0528 21:49:17.522655   70393 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem, removing ...
	I0528 21:49:17.522662   70393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem
	I0528 21:49:17.522678   70393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem (1078 bytes)
	I0528 21:49:17.522763   70393 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-499466 san=[127.0.0.1 192.168.39.8 localhost minikube old-k8s-version-499466]
	I0528 21:49:17.761848   70393 provision.go:177] copyRemoteCerts
	I0528 21:49:17.761901   70393 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 21:49:17.761926   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHHostname
	I0528 21:49:17.764685   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:49:17.765046   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:bf:9b", ip: ""} in network mk-old-k8s-version-499466: {Iface:virbr1 ExpiryTime:2024-05-28 22:49:11 +0000 UTC Type:0 Mac:52:54:00:04:bf:9b Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:old-k8s-version-499466 Clientid:01:52:54:00:04:bf:9b}
	I0528 21:49:17.765065   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined IP address 192.168.39.8 and MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:49:17.765236   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHPort
	I0528 21:49:17.765423   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHKeyPath
	I0528 21:49:17.765565   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHUsername
	I0528 21:49:17.765683   70393 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/old-k8s-version-499466/id_rsa Username:docker}
	I0528 21:49:17.847458   70393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0528 21:49:17.871601   70393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0528 21:49:17.896564   70393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0528 21:49:17.922497   70393 provision.go:87] duration metric: took 406.843035ms to configureAuth
	I0528 21:49:17.922527   70393 buildroot.go:189] setting minikube options for container-runtime
	I0528 21:49:17.922732   70393 config.go:182] Loaded profile config "old-k8s-version-499466": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0528 21:49:17.922820   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHHostname
	I0528 21:49:17.925287   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:49:17.925607   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:bf:9b", ip: ""} in network mk-old-k8s-version-499466: {Iface:virbr1 ExpiryTime:2024-05-28 22:49:11 +0000 UTC Type:0 Mac:52:54:00:04:bf:9b Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:old-k8s-version-499466 Clientid:01:52:54:00:04:bf:9b}
	I0528 21:49:17.925639   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined IP address 192.168.39.8 and MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:49:17.925782   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHPort
	I0528 21:49:17.925990   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHKeyPath
	I0528 21:49:17.926171   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHKeyPath
	I0528 21:49:17.926314   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHUsername
	I0528 21:49:17.926467   70393 main.go:141] libmachine: Using SSH client type: native
	I0528 21:49:17.926686   70393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.8 22 <nil> <nil>}
	I0528 21:49:17.926712   70393 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0528 21:49:18.193932   70393 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0528 21:49:18.193964   70393 machine.go:97] duration metric: took 1.047941664s to provisionDockerMachine
	I0528 21:49:18.193975   70393 start.go:293] postStartSetup for "old-k8s-version-499466" (driver="kvm2")
	I0528 21:49:18.193985   70393 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0528 21:49:18.193999   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .DriverName
	I0528 21:49:18.194313   70393 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0528 21:49:18.194343   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHHostname
	I0528 21:49:18.196830   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:49:18.197137   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:bf:9b", ip: ""} in network mk-old-k8s-version-499466: {Iface:virbr1 ExpiryTime:2024-05-28 22:49:11 +0000 UTC Type:0 Mac:52:54:00:04:bf:9b Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:old-k8s-version-499466 Clientid:01:52:54:00:04:bf:9b}
	I0528 21:49:18.197160   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined IP address 192.168.39.8 and MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:49:18.197330   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHPort
	I0528 21:49:18.197525   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHKeyPath
	I0528 21:49:18.197689   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHUsername
	I0528 21:49:18.197844   70393 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/old-k8s-version-499466/id_rsa Username:docker}
	I0528 21:49:18.288910   70393 ssh_runner.go:195] Run: cat /etc/os-release
	I0528 21:49:18.293430   70393 info.go:137] Remote host: Buildroot 2023.02.9
	I0528 21:49:18.293454   70393 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/addons for local assets ...
	I0528 21:49:18.293535   70393 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/files for local assets ...
	I0528 21:49:18.293627   70393 filesync.go:149] local asset: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem -> 117602.pem in /etc/ssl/certs
	I0528 21:49:18.293743   70393 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0528 21:49:18.302876   70393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem --> /etc/ssl/certs/117602.pem (1708 bytes)
	I0528 21:49:18.326641   70393 start.go:296] duration metric: took 132.654341ms for postStartSetup
	I0528 21:49:18.326676   70393 fix.go:56] duration metric: took 18.007718349s for fixHost
	I0528 21:49:18.326695   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHHostname
	I0528 21:49:18.329194   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:49:18.329506   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:bf:9b", ip: ""} in network mk-old-k8s-version-499466: {Iface:virbr1 ExpiryTime:2024-05-28 22:49:11 +0000 UTC Type:0 Mac:52:54:00:04:bf:9b Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:old-k8s-version-499466 Clientid:01:52:54:00:04:bf:9b}
	I0528 21:49:18.329527   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined IP address 192.168.39.8 and MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:49:18.329676   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHPort
	I0528 21:49:18.329905   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHKeyPath
	I0528 21:49:18.330101   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHKeyPath
	I0528 21:49:18.330264   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHUsername
	I0528 21:49:18.330428   70393 main.go:141] libmachine: Using SSH client type: native
	I0528 21:49:18.330601   70393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.8 22 <nil> <nil>}
	I0528 21:49:18.330611   70393 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0528 21:49:18.442408   70393 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716932958.400089281
	
	I0528 21:49:18.442440   70393 fix.go:216] guest clock: 1716932958.400089281
	I0528 21:49:18.442451   70393 fix.go:229] Guest: 2024-05-28 21:49:18.400089281 +0000 UTC Remote: 2024-05-28 21:49:18.326680277 +0000 UTC m=+248.847742944 (delta=73.409004ms)
	I0528 21:49:18.442480   70393 fix.go:200] guest clock delta is within tolerance: 73.409004ms
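	(The fix.go lines above run `date +%s.%N` on the guest, compare the result with the host's wall clock, and accept the machine when the delta stays under a tolerance. A small Go sketch of that comparison using the values from the log; the 2s tolerance here is an assumption for illustration.)

	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	// clockDelta parses the guest's `date +%s.%N` output and returns how far
	// the guest clock is from the given host time.
	func clockDelta(guestOutput string, host time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(guestOutput, 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return host.Sub(guest).Abs(), nil
	}

	func main() {
		// Values taken from the log lines above.
		host := time.Date(2024, 5, 28, 21, 49, 18, 326680277, time.UTC)
		delta, err := clockDelta("1716932958.400089281", host)
		if err != nil {
			panic(err)
		}
		const tolerance = 2 * time.Second // assumed tolerance
		fmt.Printf("delta=%s within tolerance: %v\n", delta, delta <= tolerance)
	}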
	I0528 21:49:18.442488   70393 start.go:83] releasing machines lock for "old-k8s-version-499466", held for 18.12356144s
	I0528 21:49:18.442527   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .DriverName
	I0528 21:49:18.442825   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetIP
	I0528 21:49:18.445340   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:49:18.445747   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:bf:9b", ip: ""} in network mk-old-k8s-version-499466: {Iface:virbr1 ExpiryTime:2024-05-28 22:49:11 +0000 UTC Type:0 Mac:52:54:00:04:bf:9b Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:old-k8s-version-499466 Clientid:01:52:54:00:04:bf:9b}
	I0528 21:49:18.445795   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined IP address 192.168.39.8 and MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:49:18.445888   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .DriverName
	I0528 21:49:18.446367   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .DriverName
	I0528 21:49:18.446527   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .DriverName
	I0528 21:49:18.446580   70393 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0528 21:49:18.446633   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHHostname
	I0528 21:49:18.446752   70393 ssh_runner.go:195] Run: cat /version.json
	I0528 21:49:18.446777   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHHostname
	I0528 21:49:18.449212   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:49:18.449412   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:49:18.449590   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:bf:9b", ip: ""} in network mk-old-k8s-version-499466: {Iface:virbr1 ExpiryTime:2024-05-28 22:49:11 +0000 UTC Type:0 Mac:52:54:00:04:bf:9b Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:old-k8s-version-499466 Clientid:01:52:54:00:04:bf:9b}
	I0528 21:49:18.449618   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined IP address 192.168.39.8 and MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:49:18.449725   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHPort
	I0528 21:49:18.449852   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:bf:9b", ip: ""} in network mk-old-k8s-version-499466: {Iface:virbr1 ExpiryTime:2024-05-28 22:49:11 +0000 UTC Type:0 Mac:52:54:00:04:bf:9b Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:old-k8s-version-499466 Clientid:01:52:54:00:04:bf:9b}
	I0528 21:49:18.449871   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined IP address 192.168.39.8 and MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:49:18.449900   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHKeyPath
	I0528 21:49:18.450012   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHPort
	I0528 21:49:18.450085   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHUsername
	I0528 21:49:18.450150   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHKeyPath
	I0528 21:49:18.450219   70393 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/old-k8s-version-499466/id_rsa Username:docker}
	I0528 21:49:18.450280   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetSSHUsername
	I0528 21:49:18.450419   70393 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/old-k8s-version-499466/id_rsa Username:docker}
	I0528 21:49:18.562755   70393 ssh_runner.go:195] Run: systemctl --version
	I0528 21:49:18.569662   70393 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0528 21:49:18.720756   70393 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0528 21:49:18.727362   70393 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0528 21:49:18.727445   70393 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 21:49:18.744665   70393 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0528 21:49:18.744693   70393 start.go:494] detecting cgroup driver to use...
	I0528 21:49:18.744765   70393 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0528 21:49:18.762447   70393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 21:49:18.778244   70393 docker.go:217] disabling cri-docker service (if available) ...
	I0528 21:49:18.778317   70393 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0528 21:49:18.792794   70393 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0528 21:49:18.807567   70393 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0528 21:49:18.932730   70393 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0528 21:49:19.078844   70393 docker.go:233] disabling docker service ...
	I0528 21:49:19.078911   70393 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0528 21:49:19.094586   70393 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0528 21:49:19.107748   70393 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0528 21:49:19.258255   70393 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0528 21:49:19.386567   70393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0528 21:49:19.400227   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 21:49:19.420089   70393 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0528 21:49:19.420147   70393 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:49:19.430122   70393 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0528 21:49:19.430179   70393 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:49:19.443196   70393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:49:19.456357   70393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:49:19.467769   70393 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0528 21:49:19.479251   70393 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0528 21:49:19.489047   70393 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0528 21:49:19.489106   70393 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0528 21:49:19.502201   70393 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0528 21:49:19.512092   70393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 21:49:19.637188   70393 ssh_runner.go:195] Run: sudo systemctl restart crio
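	(The sed invocations above swap the pause_image and cgroup_manager values inside /etc/crio/crio.conf.d/02-crio.conf before reloading systemd and restarting crio. The equivalent line-oriented substitution, sketched in Go against an in-memory copy of the file; this is not the code minikube runs, just the same rewrite.)

	package main

	import (
		"fmt"
		"regexp"
	)

	// rewriteCrioConf performs the same substitutions as the sed commands in
	// the log: pin the pause image and force the cgroupfs cgroup manager.
	func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "`+pauseImage+`"`)
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "`+cgroupManager+`"`)
		return conf
	}

	func main() {
		conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n" +
			"[crio.runtime]\ncgroup_manager = \"systemd\"\n"
		fmt.Print(rewriteCrioConf(conf, "registry.k8s.io/pause:3.2", "cgroupfs"))
	}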
	I0528 21:49:19.797611   70393 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0528 21:49:19.797702   70393 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0528 21:49:19.802948   70393 start.go:562] Will wait 60s for crictl version
	I0528 21:49:19.803007   70393 ssh_runner.go:195] Run: which crictl
	I0528 21:49:19.806939   70393 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0528 21:49:19.845785   70393 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
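	("Will wait 60s for socket path /var/run/crio/crio.sock" plus the stat and crictl calls above form a readiness gate: poll until the socket exists, then confirm the runtime answers a version query. A hedged Go sketch of that gate; the socket path, crictl command, and 60s budget come from the log, the 500ms polling interval is an assumption.)

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	// waitForSocket polls os.Stat on the socket path until it appears or the
	// timeout elapses, then runs `crictl version` as a liveness check.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				break
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for %s", path)
			}
			time.Sleep(500 * time.Millisecond) // assumed poll interval
		}
		out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
		if err != nil {
			return fmt.Errorf("crictl version: %v\n%s", err, out)
		}
		fmt.Print(string(out))
		return nil
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
		}
	}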
	I0528 21:49:19.845863   70393 ssh_runner.go:195] Run: crio --version
	I0528 21:49:19.876288   70393 ssh_runner.go:195] Run: crio --version
	I0528 21:49:19.907701   70393 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0528 21:49:19.909270   70393 main.go:141] libmachine: (old-k8s-version-499466) Calling .GetIP
	I0528 21:49:19.912295   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:49:19.912760   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:bf:9b", ip: ""} in network mk-old-k8s-version-499466: {Iface:virbr1 ExpiryTime:2024-05-28 22:49:11 +0000 UTC Type:0 Mac:52:54:00:04:bf:9b Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:old-k8s-version-499466 Clientid:01:52:54:00:04:bf:9b}
	I0528 21:49:19.912794   70393 main.go:141] libmachine: (old-k8s-version-499466) DBG | domain old-k8s-version-499466 has defined IP address 192.168.39.8 and MAC address 52:54:00:04:bf:9b in network mk-old-k8s-version-499466
	I0528 21:49:19.913101   70393 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0528 21:49:19.917932   70393 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 21:49:19.933930   70393 kubeadm.go:877] updating cluster {Name:old-k8s-version-499466 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-499466 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.8 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0528 21:49:19.934092   70393 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0528 21:49:19.934156   70393 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 21:49:19.996310   70393 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0528 21:49:19.996376   70393 ssh_runner.go:195] Run: which lz4
	I0528 21:49:20.000522   70393 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0528 21:49:20.005043   70393 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0528 21:49:20.005077   70393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0528 21:49:21.780949   70393 crio.go:462] duration metric: took 1.780454008s to copy over tarball
	I0528 21:49:21.781029   70393 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0528 21:49:24.756789   70393 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.975737688s)
	I0528 21:49:24.756814   70393 crio.go:469] duration metric: took 2.975838245s to extract the tarball
	I0528 21:49:24.756821   70393 ssh_runner.go:146] rm: /preloaded.tar.lz4
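	(Because no preloaded images are found in the runtime, the log falls back to copying the preload tarball over scp and unpacking it with `tar -I lz4 -C /var` before deleting it. A short Go sketch of that extract step run through exec, timed the way the duration metric above is; the command line is copied from the log, error handling is deliberately minimal.)

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		start := time.Now()
		// Same command the log runs on the guest after the tarball is copied over.
		cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("extract failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("took %s to extract the tarball\n", time.Since(start))
	}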
	I0528 21:49:24.800453   70393 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 21:49:24.836409   70393 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0528 21:49:24.836432   70393 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0528 21:49:24.836491   70393 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0528 21:49:24.836496   70393 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0528 21:49:24.836543   70393 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0528 21:49:24.836495   70393 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0528 21:49:24.836653   70393 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0528 21:49:24.836698   70393 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0528 21:49:24.836710   70393 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0528 21:49:24.836961   70393 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0528 21:49:24.838255   70393 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0528 21:49:24.838294   70393 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0528 21:49:24.838299   70393 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0528 21:49:24.838265   70393 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0528 21:49:24.838272   70393 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0528 21:49:24.838339   70393 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0528 21:49:24.838395   70393 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0528 21:49:24.838427   70393 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0528 21:49:25.000581   70393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0528 21:49:25.040306   70393 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0528 21:49:25.040366   70393 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0528 21:49:25.040423   70393 ssh_runner.go:195] Run: which crictl
	I0528 21:49:25.044879   70393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0528 21:49:25.051554   70393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0528 21:49:25.095909   70393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0528 21:49:25.096584   70393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0528 21:49:25.096653   70393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0528 21:49:25.123332   70393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0528 21:49:25.123982   70393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0528 21:49:25.138015   70393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0528 21:49:25.209046   70393 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0528 21:49:25.209094   70393 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0528 21:49:25.209140   70393 ssh_runner.go:195] Run: which crictl
	I0528 21:49:25.209243   70393 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0528 21:49:25.209269   70393 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0528 21:49:25.209303   70393 ssh_runner.go:195] Run: which crictl
	I0528 21:49:25.223293   70393 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0528 21:49:25.223340   70393 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0528 21:49:25.223395   70393 ssh_runner.go:195] Run: which crictl
	I0528 21:49:25.257551   70393 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0528 21:49:25.257596   70393 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0528 21:49:25.257640   70393 ssh_runner.go:195] Run: which crictl
	I0528 21:49:25.262160   70393 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0528 21:49:25.262209   70393 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0528 21:49:25.262274   70393 ssh_runner.go:195] Run: which crictl
	I0528 21:49:25.262289   70393 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0528 21:49:25.262320   70393 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0528 21:49:25.262333   70393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0528 21:49:25.262340   70393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0528 21:49:25.262358   70393 ssh_runner.go:195] Run: which crictl
	I0528 21:49:25.262383   70393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0528 21:49:25.265902   70393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0528 21:49:25.273321   70393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0528 21:49:25.333948   70393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0528 21:49:25.385928   70393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0528 21:49:25.385983   70393 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0528 21:49:25.386012   70393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0528 21:49:25.385991   70393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0528 21:49:25.386067   70393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0528 21:49:25.429587   70393 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0528 21:49:25.828458   70393 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0528 21:49:25.977164   70393 cache_images.go:92] duration metric: took 1.140717713s to LoadCachedImages
	W0528 21:49:25.977258   70393 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18966-3963/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0528 21:49:25.977276   70393 kubeadm.go:928] updating node { 192.168.39.8 8443 v1.20.0 crio true true} ...
	I0528 21:49:25.977429   70393 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-499466 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.8
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-499466 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0528 21:49:25.977515   70393 ssh_runner.go:195] Run: crio config
	I0528 21:49:26.024784   70393 cni.go:84] Creating CNI manager for ""
	I0528 21:49:26.024807   70393 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 21:49:26.024815   70393 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0528 21:49:26.024833   70393 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.8 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-499466 NodeName:old-k8s-version-499466 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.8"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.8 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0528 21:49:26.024949   70393 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.8
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-499466"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.8
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.8"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0528 21:49:26.025004   70393 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0528 21:49:26.038964   70393 binaries.go:44] Found k8s binaries, skipping transfer
	I0528 21:49:26.039025   70393 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0528 21:49:26.050425   70393 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (428 bytes)
	I0528 21:49:26.068110   70393 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0528 21:49:26.084644   70393 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
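The kubeadm config dump above is rendered from the cluster parameters shown at kubeadm.go:181 and is what gets written to /var/tmp/minikube/kubeadm.yaml.new on the node. As a rough, standalone illustration of that rendering step (not minikube's actual bootstrapper code; the clusterParams type and its field names are made up for this sketch), a text/template version in Go:

package main

import (
	"os"
	"text/template"
)

// clusterParams carries the values substituted into the config; the type and
// field names here are assumptions for this sketch, not minikube's own types.
type clusterParams struct {
	ClusterName       string
	KubernetesVersion string
	BindPort          int
	PodSubnet         string
	ServiceSubnet     string
}

const clusterConfigTmpl = "apiVersion: kubeadm.k8s.io/v1beta2\n" +
	"kind: ClusterConfiguration\n" +
	"clusterName: {{.ClusterName}}\n" +
	"controlPlaneEndpoint: control-plane.minikube.internal:{{.BindPort}}\n" +
	"kubernetesVersion: {{.KubernetesVersion}}\n" +
	"networking:\n" +
	"  podSubnet: \"{{.PodSubnet}}\"\n" +
	"  serviceSubnet: {{.ServiceSubnet}}\n"

func main() {
	p := clusterParams{
		ClusterName:       "mk",
		KubernetesVersion: "v1.20.0",
		BindPort:          8443,
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
	}
	// Render to stdout; the real flow writes the result to
	// /var/tmp/minikube/kubeadm.yaml.new and later copies it into place.
	tmpl := template.Must(template.New("kubeadm").Parse(clusterConfigTmpl))
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		os.Exit(1)
	}
}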
	I0528 21:49:26.102603   70393 ssh_runner.go:195] Run: grep 192.168.39.8	control-plane.minikube.internal$ /etc/hosts
	I0528 21:49:26.106639   70393 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.8	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 21:49:26.120021   70393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 21:49:26.244388   70393 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 21:49:26.261329   70393 certs.go:68] Setting up /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/old-k8s-version-499466 for IP: 192.168.39.8
	I0528 21:49:26.261355   70393 certs.go:194] generating shared ca certs ...
	I0528 21:49:26.261375   70393 certs.go:226] acquiring lock for ca certs: {Name:mkf081a2e878fd451cfbdf01dc28a5f7a6a3abc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:49:26.261577   70393 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key
	I0528 21:49:26.261630   70393 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key
	I0528 21:49:26.261643   70393 certs.go:256] generating profile certs ...
	I0528 21:49:26.261748   70393 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/old-k8s-version-499466/client.key
	I0528 21:49:26.261844   70393 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/old-k8s-version-499466/apiserver.key.2337190f
	I0528 21:49:26.261904   70393 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/old-k8s-version-499466/proxy-client.key
	I0528 21:49:26.262064   70393 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem (1338 bytes)
	W0528 21:49:26.262102   70393 certs.go:480] ignoring /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760_empty.pem, impossibly tiny 0 bytes
	I0528 21:49:26.262115   70393 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem (1675 bytes)
	I0528 21:49:26.262148   70393 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem (1078 bytes)
	I0528 21:49:26.262180   70393 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem (1123 bytes)
	I0528 21:49:26.262208   70393 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem (1675 bytes)
	I0528 21:49:26.262260   70393 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem (1708 bytes)
	I0528 21:49:26.263095   70393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0528 21:49:26.304241   70393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0528 21:49:26.334661   70393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0528 21:49:26.362140   70393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0528 21:49:26.420578   70393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/old-k8s-version-499466/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0528 21:49:26.470107   70393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/old-k8s-version-499466/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0528 21:49:26.506792   70393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/old-k8s-version-499466/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0528 21:49:26.544368   70393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/old-k8s-version-499466/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0528 21:49:26.569344   70393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem --> /usr/share/ca-certificates/117602.pem (1708 bytes)
	I0528 21:49:26.599043   70393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0528 21:49:26.628265   70393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem --> /usr/share/ca-certificates/11760.pem (1338 bytes)
	I0528 21:49:26.658871   70393 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0528 21:49:26.676177   70393 ssh_runner.go:195] Run: openssl version
	I0528 21:49:26.682230   70393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/117602.pem && ln -fs /usr/share/ca-certificates/117602.pem /etc/ssl/certs/117602.pem"
	I0528 21:49:26.692846   70393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117602.pem
	I0528 21:49:26.697291   70393 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 28 20:34 /usr/share/ca-certificates/117602.pem
	I0528 21:49:26.697338   70393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117602.pem
	I0528 21:49:26.703144   70393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/117602.pem /etc/ssl/certs/3ec20f2e.0"
	I0528 21:49:26.717469   70393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0528 21:49:26.728451   70393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0528 21:49:26.734506   70393 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 28 20:22 /usr/share/ca-certificates/minikubeCA.pem
	I0528 21:49:26.734560   70393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0528 21:49:26.741848   70393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0528 21:49:26.753356   70393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11760.pem && ln -fs /usr/share/ca-certificates/11760.pem /etc/ssl/certs/11760.pem"
	I0528 21:49:26.764211   70393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11760.pem
	I0528 21:49:26.768771   70393 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 28 20:34 /usr/share/ca-certificates/11760.pem
	I0528 21:49:26.768839   70393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11760.pem
	I0528 21:49:26.774416   70393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11760.pem /etc/ssl/certs/51391683.0"
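The three blocks above link each certificate from /usr/share/ca-certificates into /etc/ssl/certs, first by name and then under its OpenSSL subject hash (the value printed by openssl x509 -hash -noout). A minimal Go sketch of that hash-link step, shelling out to openssl exactly as the log does; the linkCACert helper is hypothetical:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert creates the /etc/ssl/certs/<subject-hash>.0 symlink that the
// "ln -fs" steps above set up, using openssl to compute the hash the same
// way the log does.
func linkCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	// ln -fs equivalent: drop any stale link before creating the new one.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}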
	I0528 21:49:26.785237   70393 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0528 21:49:26.790013   70393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0528 21:49:26.796050   70393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0528 21:49:26.801691   70393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0528 21:49:26.808191   70393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0528 21:49:26.814034   70393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0528 21:49:26.820396   70393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
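Each openssl x509 ... -checkend 86400 run above exits non-zero if the certificate expires within the next 24 hours (86400 seconds), which is what would force regeneration before the restart. The same check expressed as a standalone Go sketch, illustrative only and not the code minikube runs:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// i.e. the same condition openssl's -checkend 86400 tests for 24 hours.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h; it would be regenerated")
	}
}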
	I0528 21:49:26.826423   70393 kubeadm.go:391] StartCluster: {Name:old-k8s-version-499466 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-499466 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.8 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:49:26.826496   70393 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0528 21:49:26.826543   70393 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0528 21:49:26.873732   70393 cri.go:89] found id: ""
	I0528 21:49:26.873822   70393 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0528 21:49:26.884521   70393 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0528 21:49:26.884539   70393 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0528 21:49:26.884544   70393 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0528 21:49:26.884582   70393 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0528 21:49:26.894943   70393 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0528 21:49:26.896020   70393 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-499466" does not appear in /home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 21:49:26.896781   70393 kubeconfig.go:62] /home/jenkins/minikube-integration/18966-3963/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-499466" cluster setting kubeconfig missing "old-k8s-version-499466" context setting]
	I0528 21:49:26.897891   70393 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/kubeconfig: {Name:mkb9ab62df00efc21274382bbe7156f03baabac9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:49:26.963112   70393 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0528 21:49:26.973849   70393 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.8
	I0528 21:49:26.973883   70393 kubeadm.go:1154] stopping kube-system containers ...
	I0528 21:49:26.973896   70393 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0528 21:49:26.973948   70393 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0528 21:49:27.012772   70393 cri.go:89] found id: ""
	I0528 21:49:27.012855   70393 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0528 21:49:27.029579   70393 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0528 21:49:27.040792   70393 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0528 21:49:27.040813   70393 kubeadm.go:156] found existing configuration files:
	
	I0528 21:49:27.040860   70393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0528 21:49:27.050799   70393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0528 21:49:27.050857   70393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0528 21:49:27.059972   70393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0528 21:49:27.069727   70393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0528 21:49:27.069808   70393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0528 21:49:27.079902   70393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0528 21:49:27.089390   70393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0528 21:49:27.089443   70393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0528 21:49:27.099779   70393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0528 21:49:27.111027   70393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0528 21:49:27.111095   70393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
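The grep/rm sequence above is the stale-config cleanup during restart: each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf is checked for the expected https://control-plane.minikube.internal:8443 endpoint and removed when the grep fails, so the kubeadm init phase calls that follow can recreate them. A compact local sketch of that loop (illustrative only; the real flow runs these commands over SSH):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// cleanupStaleKubeconfigs mirrors the grep-then-rm sequence above: any config
// that is missing or does not mention the expected control-plane endpoint is
// removed so that "kubeadm init phase kubeconfig all" recreates it.
func cleanupStaleKubeconfigs(endpoint string) {
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(f)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			_ = os.Remove(f)
			fmt.Printf("removed stale config %s\n", f)
		}
	}
}

func main() {
	cleanupStaleKubeconfigs("https://control-plane.minikube.internal:8443")
}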
	I0528 21:49:27.122195   70393 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0528 21:49:27.132022   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 21:49:27.277141   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 21:49:27.950198   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0528 21:49:28.247455   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 21:49:28.435044   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0528 21:49:28.508547   70393 api_server.go:52] waiting for apiserver process to appear ...
	I0528 21:49:28.508638   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:29.009083   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:29.509019   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:30.009680   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:30.509168   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:31.009520   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:31.509508   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:32.009559   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:32.509737   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:33.008718   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:33.509081   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:34.008778   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:34.509277   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:35.009703   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:35.509416   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:36.009708   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:36.509085   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:37.009491   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:37.509350   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:38.009673   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:38.509107   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:39.009268   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:39.509547   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:40.009500   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:40.508739   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:41.008667   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:41.508873   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:42.008806   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:42.508966   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:43.009737   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:43.508757   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:44.009166   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:44.509339   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:45.009071   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:45.509096   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:46.009717   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:46.509252   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:47.009397   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:47.508750   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:48.009594   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:48.509052   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:49.009687   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:49.509740   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:50.009696   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:50.509442   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:51.009126   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:51.509714   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:52.009423   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:52.509469   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:53.009357   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:53.509569   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:54.008909   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:54.508728   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:55.008782   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:55.509636   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:56.008936   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:56.509052   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:57.008867   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:57.508992   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:58.009398   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:58.509394   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:59.008777   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:49:59.509619   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:00.009436   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:00.508778   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:01.009149   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:01.508981   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:02.009440   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:02.508942   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:03.009182   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:03.509401   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:04.009627   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:04.508838   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:05.009458   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:05.509489   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:06.009444   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:06.508777   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:07.009129   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:07.509262   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:08.009496   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:08.508811   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:09.008852   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:09.509253   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:10.008822   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:10.508921   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:11.008917   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:11.508951   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:12.009623   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:12.509271   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:13.009552   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:13.509169   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:14.008784   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:14.508825   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:15.009594   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:15.509555   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:16.009683   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:16.509384   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:17.008941   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:17.508906   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:18.008798   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:18.508726   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:19.009675   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:19.509196   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:20.009371   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:20.509087   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:21.008662   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:21.508790   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:22.008745   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:22.509029   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:23.008893   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:23.508753   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:24.009310   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:24.509490   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:25.008777   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:25.509325   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:26.008837   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:26.508826   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:27.008906   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:27.509442   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:28.009146   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
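The long run of sudo pgrep -xnf kube-apiserver.*minikube.* lines above is a roughly 500ms poll waiting for the apiserver process to appear; after about a minute with no match it falls back to gathering logs. A stripped-down local version of that wait loop (an illustrative sketch, not minikube's api_server.go):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep every 500ms until a kube-apiserver process
// appears or the timeout elapses; pgrep exits non-zero when nothing matches.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(time.Minute); err != nil {
		// In the log above this is the point where minikube gives up and
		// starts collecting kubelet, dmesg, CRI-O and container-status logs.
		fmt.Println(err)
	}
}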
	I0528 21:50:28.509306   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:50:28.509391   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:50:28.555001   70393 cri.go:89] found id: ""
	I0528 21:50:28.555027   70393 logs.go:276] 0 containers: []
	W0528 21:50:28.555038   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:50:28.555046   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:50:28.555107   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:50:28.599346   70393 cri.go:89] found id: ""
	I0528 21:50:28.599373   70393 logs.go:276] 0 containers: []
	W0528 21:50:28.599384   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:50:28.599391   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:50:28.599452   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:50:28.643879   70393 cri.go:89] found id: ""
	I0528 21:50:28.643905   70393 logs.go:276] 0 containers: []
	W0528 21:50:28.643916   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:50:28.643924   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:50:28.643994   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:50:28.680088   70393 cri.go:89] found id: ""
	I0528 21:50:28.680118   70393 logs.go:276] 0 containers: []
	W0528 21:50:28.680130   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:50:28.680137   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:50:28.680195   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:50:28.719540   70393 cri.go:89] found id: ""
	I0528 21:50:28.719563   70393 logs.go:276] 0 containers: []
	W0528 21:50:28.719570   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:50:28.719576   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:50:28.719627   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:50:28.759117   70393 cri.go:89] found id: ""
	I0528 21:50:28.759146   70393 logs.go:276] 0 containers: []
	W0528 21:50:28.759158   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:50:28.759165   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:50:28.759232   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:50:28.799815   70393 cri.go:89] found id: ""
	I0528 21:50:28.799837   70393 logs.go:276] 0 containers: []
	W0528 21:50:28.799844   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:50:28.799849   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:50:28.799897   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:50:28.849974   70393 cri.go:89] found id: ""
	I0528 21:50:28.850000   70393 logs.go:276] 0 containers: []
	W0528 21:50:28.850010   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:50:28.850020   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:50:28.850041   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:50:29.001935   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:50:29.001966   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:50:29.001992   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:50:29.067971   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:50:29.068001   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:50:29.119981   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:50:29.120010   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:50:29.175239   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:50:29.175280   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:50:31.694075   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:31.708267   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:50:31.708324   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:50:31.748513   70393 cri.go:89] found id: ""
	I0528 21:50:31.748538   70393 logs.go:276] 0 containers: []
	W0528 21:50:31.748546   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:50:31.748552   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:50:31.748600   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:50:31.803969   70393 cri.go:89] found id: ""
	I0528 21:50:31.803995   70393 logs.go:276] 0 containers: []
	W0528 21:50:31.804005   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:50:31.804013   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:50:31.804072   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:50:31.853903   70393 cri.go:89] found id: ""
	I0528 21:50:31.853935   70393 logs.go:276] 0 containers: []
	W0528 21:50:31.853946   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:50:31.853953   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:50:31.854015   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:50:31.898767   70393 cri.go:89] found id: ""
	I0528 21:50:31.898798   70393 logs.go:276] 0 containers: []
	W0528 21:50:31.898806   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:50:31.898812   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:50:31.898878   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:50:31.945125   70393 cri.go:89] found id: ""
	I0528 21:50:31.945154   70393 logs.go:276] 0 containers: []
	W0528 21:50:31.945165   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:50:31.945172   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:50:31.945242   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:50:31.981174   70393 cri.go:89] found id: ""
	I0528 21:50:31.981215   70393 logs.go:276] 0 containers: []
	W0528 21:50:31.981232   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:50:31.981242   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:50:31.981325   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:50:32.021975   70393 cri.go:89] found id: ""
	I0528 21:50:32.022017   70393 logs.go:276] 0 containers: []
	W0528 21:50:32.022030   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:50:32.022038   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:50:32.022100   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:50:32.064058   70393 cri.go:89] found id: ""
	I0528 21:50:32.064087   70393 logs.go:276] 0 containers: []
	W0528 21:50:32.064098   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:50:32.064108   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:50:32.064121   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:50:32.114154   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:50:32.114187   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:50:32.131047   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:50:32.131074   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:50:32.205596   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:50:32.205617   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:50:32.205630   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:50:32.275061   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:50:32.275098   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:50:34.820864   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:34.833631   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:50:34.833710   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:50:34.867528   70393 cri.go:89] found id: ""
	I0528 21:50:34.867556   70393 logs.go:276] 0 containers: []
	W0528 21:50:34.867566   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:50:34.867574   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:50:34.867635   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:50:34.906827   70393 cri.go:89] found id: ""
	I0528 21:50:34.906858   70393 logs.go:276] 0 containers: []
	W0528 21:50:34.906869   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:50:34.906876   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:50:34.906962   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:50:34.950103   70393 cri.go:89] found id: ""
	I0528 21:50:34.950136   70393 logs.go:276] 0 containers: []
	W0528 21:50:34.950147   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:50:34.950155   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:50:34.950220   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:50:34.988634   70393 cri.go:89] found id: ""
	I0528 21:50:34.988664   70393 logs.go:276] 0 containers: []
	W0528 21:50:34.988674   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:50:34.988685   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:50:34.988737   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:50:35.026675   70393 cri.go:89] found id: ""
	I0528 21:50:35.026701   70393 logs.go:276] 0 containers: []
	W0528 21:50:35.026711   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:50:35.026719   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:50:35.026784   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:50:35.068389   70393 cri.go:89] found id: ""
	I0528 21:50:35.068413   70393 logs.go:276] 0 containers: []
	W0528 21:50:35.068421   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:50:35.068428   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:50:35.068483   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:50:35.110737   70393 cri.go:89] found id: ""
	I0528 21:50:35.110761   70393 logs.go:276] 0 containers: []
	W0528 21:50:35.110768   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:50:35.110787   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:50:35.110837   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:50:35.150239   70393 cri.go:89] found id: ""
	I0528 21:50:35.150267   70393 logs.go:276] 0 containers: []
	W0528 21:50:35.150275   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:50:35.150284   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:50:35.150298   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:50:35.207067   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:50:35.207103   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:50:35.223977   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:50:35.224010   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:50:35.299724   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:50:35.299748   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:50:35.299761   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:50:35.376618   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:50:35.376653   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
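The cycle above repeats for the rest of this log: minikube polls for a kube-apiserver process, lists CRI containers for each control-plane component, finds none, and falls back to gathering kubelet, dmesg, CRI-O, and container-status output. The same checks can be replayed by hand when debugging a stuck start. The sketch below is a minimal reconstruction assembled only from the "Run:" commands visible in these logs; it assumes a shell inside the node (for example via `minikube ssh -p <profile>`, where the profile name is an assumption, not taken from this log).

	# Replay of the diagnostic loop shown above (run inside the minikube node).
	# Each check mirrors a "Run:" line in the log; here every one returned no containers.
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	  echo "== ${c} =="
	  sudo crictl ps -a --quiet --name="${c}"    # empty output == component never started
	done
	sudo journalctl -u kubelet -n 400            # kubelet logs
	sudo journalctl -u crio -n 400               # CRI-O logs
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo crictl ps -a                            # overall container status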
	I0528 21:50:37.916806   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:37.930899   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:50:37.931016   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:50:37.972166   70393 cri.go:89] found id: ""
	I0528 21:50:37.972201   70393 logs.go:276] 0 containers: []
	W0528 21:50:37.972212   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:50:37.972220   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:50:37.972339   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:50:38.008033   70393 cri.go:89] found id: ""
	I0528 21:50:38.008056   70393 logs.go:276] 0 containers: []
	W0528 21:50:38.008064   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:50:38.008069   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:50:38.008120   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:50:38.045394   70393 cri.go:89] found id: ""
	I0528 21:50:38.045425   70393 logs.go:276] 0 containers: []
	W0528 21:50:38.045435   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:50:38.045444   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:50:38.045511   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:50:38.088048   70393 cri.go:89] found id: ""
	I0528 21:50:38.088076   70393 logs.go:276] 0 containers: []
	W0528 21:50:38.088087   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:50:38.088093   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:50:38.088138   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:50:38.127352   70393 cri.go:89] found id: ""
	I0528 21:50:38.127382   70393 logs.go:276] 0 containers: []
	W0528 21:50:38.127392   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:50:38.127399   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:50:38.127458   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:50:38.168977   70393 cri.go:89] found id: ""
	I0528 21:50:38.169001   70393 logs.go:276] 0 containers: []
	W0528 21:50:38.169008   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:50:38.169014   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:50:38.169066   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:50:38.206573   70393 cri.go:89] found id: ""
	I0528 21:50:38.206601   70393 logs.go:276] 0 containers: []
	W0528 21:50:38.206612   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:50:38.206620   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:50:38.206680   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:50:38.241729   70393 cri.go:89] found id: ""
	I0528 21:50:38.241755   70393 logs.go:276] 0 containers: []
	W0528 21:50:38.241783   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:50:38.241794   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:50:38.241810   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:50:38.295716   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:50:38.295750   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:50:38.310550   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:50:38.310576   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:50:38.386173   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:50:38.386194   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:50:38.386205   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:50:38.469467   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:50:38.469504   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:50:41.038139   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:41.051451   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:50:41.051512   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:50:41.087789   70393 cri.go:89] found id: ""
	I0528 21:50:41.087815   70393 logs.go:276] 0 containers: []
	W0528 21:50:41.087823   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:50:41.087829   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:50:41.087874   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:50:41.124602   70393 cri.go:89] found id: ""
	I0528 21:50:41.124631   70393 logs.go:276] 0 containers: []
	W0528 21:50:41.124642   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:50:41.124650   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:50:41.124712   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:50:41.161081   70393 cri.go:89] found id: ""
	I0528 21:50:41.161105   70393 logs.go:276] 0 containers: []
	W0528 21:50:41.161113   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:50:41.161118   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:50:41.161163   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:50:41.196679   70393 cri.go:89] found id: ""
	I0528 21:50:41.196705   70393 logs.go:276] 0 containers: []
	W0528 21:50:41.196715   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:50:41.196723   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:50:41.196784   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:50:41.235222   70393 cri.go:89] found id: ""
	I0528 21:50:41.235243   70393 logs.go:276] 0 containers: []
	W0528 21:50:41.235250   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:50:41.235256   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:50:41.235302   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:50:41.275299   70393 cri.go:89] found id: ""
	I0528 21:50:41.275324   70393 logs.go:276] 0 containers: []
	W0528 21:50:41.275332   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:50:41.275338   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:50:41.275383   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:50:41.311963   70393 cri.go:89] found id: ""
	I0528 21:50:41.311993   70393 logs.go:276] 0 containers: []
	W0528 21:50:41.312004   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:50:41.312012   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:50:41.312065   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:50:41.350607   70393 cri.go:89] found id: ""
	I0528 21:50:41.350640   70393 logs.go:276] 0 containers: []
	W0528 21:50:41.350650   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:50:41.350660   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:50:41.350674   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:50:41.440629   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:50:41.440669   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:50:41.481487   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:50:41.481511   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:50:41.538254   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:50:41.538292   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:50:41.552880   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:50:41.552913   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:50:41.628352   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:50:44.129442   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:44.145098   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:50:44.145170   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:50:44.196458   70393 cri.go:89] found id: ""
	I0528 21:50:44.196483   70393 logs.go:276] 0 containers: []
	W0528 21:50:44.196490   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:50:44.196496   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:50:44.196551   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:50:44.237182   70393 cri.go:89] found id: ""
	I0528 21:50:44.237206   70393 logs.go:276] 0 containers: []
	W0528 21:50:44.237214   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:50:44.237221   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:50:44.237283   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:50:44.286452   70393 cri.go:89] found id: ""
	I0528 21:50:44.286476   70393 logs.go:276] 0 containers: []
	W0528 21:50:44.286486   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:50:44.286493   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:50:44.286547   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:50:44.357547   70393 cri.go:89] found id: ""
	I0528 21:50:44.357573   70393 logs.go:276] 0 containers: []
	W0528 21:50:44.357583   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:50:44.357590   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:50:44.357650   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:50:44.401024   70393 cri.go:89] found id: ""
	I0528 21:50:44.401057   70393 logs.go:276] 0 containers: []
	W0528 21:50:44.401069   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:50:44.401076   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:50:44.401142   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:50:44.442425   70393 cri.go:89] found id: ""
	I0528 21:50:44.442448   70393 logs.go:276] 0 containers: []
	W0528 21:50:44.442455   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:50:44.442462   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:50:44.442507   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:50:44.480264   70393 cri.go:89] found id: ""
	I0528 21:50:44.480294   70393 logs.go:276] 0 containers: []
	W0528 21:50:44.480305   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:50:44.480312   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:50:44.480374   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:50:44.517728   70393 cri.go:89] found id: ""
	I0528 21:50:44.517781   70393 logs.go:276] 0 containers: []
	W0528 21:50:44.517793   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:50:44.517805   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:50:44.517818   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:50:44.571629   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:50:44.571668   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:50:44.586465   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:50:44.586495   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:50:44.661244   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:50:44.661265   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:50:44.661283   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:50:44.738860   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:50:44.738902   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
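Every "describe nodes" attempt in this log fails with a refused connection to localhost:8443 because no kube-apiserver container ever comes up, so kubectl has nothing to talk to. A quick way to confirm that from inside the node, using the commands that appear in the log plus one generic health probe (the curl call is an added assumption, not something minikube runs here):

	sudo pgrep -xnf 'kube-apiserver.*minikube.*'        # no output: no apiserver process is running
	sudo crictl ps -a --quiet --name=kube-apiserver     # no output: no apiserver container was created
	curl -ksS https://localhost:8443/healthz            # "connection refused" until the apiserver is listening
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig         # same failure mode as logged above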
	I0528 21:50:47.283858   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:47.297333   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:50:47.297390   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:50:47.334384   70393 cri.go:89] found id: ""
	I0528 21:50:47.334411   70393 logs.go:276] 0 containers: []
	W0528 21:50:47.334422   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:50:47.334430   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:50:47.334491   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:50:47.369589   70393 cri.go:89] found id: ""
	I0528 21:50:47.369626   70393 logs.go:276] 0 containers: []
	W0528 21:50:47.369652   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:50:47.369664   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:50:47.369730   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:50:47.407366   70393 cri.go:89] found id: ""
	I0528 21:50:47.407392   70393 logs.go:276] 0 containers: []
	W0528 21:50:47.407411   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:50:47.407420   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:50:47.407507   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:50:47.443014   70393 cri.go:89] found id: ""
	I0528 21:50:47.443039   70393 logs.go:276] 0 containers: []
	W0528 21:50:47.443047   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:50:47.443052   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:50:47.443099   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:50:47.479603   70393 cri.go:89] found id: ""
	I0528 21:50:47.479628   70393 logs.go:276] 0 containers: []
	W0528 21:50:47.479636   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:50:47.479644   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:50:47.479706   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:50:47.515800   70393 cri.go:89] found id: ""
	I0528 21:50:47.515841   70393 logs.go:276] 0 containers: []
	W0528 21:50:47.515852   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:50:47.515861   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:50:47.515923   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:50:47.553341   70393 cri.go:89] found id: ""
	I0528 21:50:47.553365   70393 logs.go:276] 0 containers: []
	W0528 21:50:47.553376   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:50:47.553382   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:50:47.553429   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:50:47.589404   70393 cri.go:89] found id: ""
	I0528 21:50:47.589445   70393 logs.go:276] 0 containers: []
	W0528 21:50:47.589457   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:50:47.589468   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:50:47.589486   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:50:47.604586   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:50:47.604614   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:50:47.674766   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:50:47.674802   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:50:47.674822   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:50:47.757418   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:50:47.757447   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:50:47.802873   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:50:47.802905   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:50:50.352889   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:50.366254   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:50:50.366315   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:50:50.402408   70393 cri.go:89] found id: ""
	I0528 21:50:50.402435   70393 logs.go:276] 0 containers: []
	W0528 21:50:50.402446   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:50:50.402454   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:50:50.402517   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:50:50.436029   70393 cri.go:89] found id: ""
	I0528 21:50:50.436058   70393 logs.go:276] 0 containers: []
	W0528 21:50:50.436067   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:50:50.436074   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:50:50.436180   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:50:50.471879   70393 cri.go:89] found id: ""
	I0528 21:50:50.471902   70393 logs.go:276] 0 containers: []
	W0528 21:50:50.471910   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:50:50.471948   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:50:50.472001   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:50:50.514714   70393 cri.go:89] found id: ""
	I0528 21:50:50.514742   70393 logs.go:276] 0 containers: []
	W0528 21:50:50.514750   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:50:50.514755   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:50:50.514801   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:50:50.552745   70393 cri.go:89] found id: ""
	I0528 21:50:50.552770   70393 logs.go:276] 0 containers: []
	W0528 21:50:50.552777   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:50:50.552783   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:50:50.552826   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:50:50.591176   70393 cri.go:89] found id: ""
	I0528 21:50:50.591202   70393 logs.go:276] 0 containers: []
	W0528 21:50:50.591214   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:50:50.591230   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:50:50.591300   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:50:50.631132   70393 cri.go:89] found id: ""
	I0528 21:50:50.631167   70393 logs.go:276] 0 containers: []
	W0528 21:50:50.631176   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:50:50.631183   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:50:50.631239   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:50:50.668350   70393 cri.go:89] found id: ""
	I0528 21:50:50.668380   70393 logs.go:276] 0 containers: []
	W0528 21:50:50.668390   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:50:50.668401   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:50:50.668417   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:50:50.681337   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:50:50.681359   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:50:50.767583   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:50:50.767606   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:50:50.767621   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:50:50.844718   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:50:50.844759   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:50:50.880819   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:50:50.880848   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:50:53.435646   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:53.449446   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:50:53.449529   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:50:53.485185   70393 cri.go:89] found id: ""
	I0528 21:50:53.485206   70393 logs.go:276] 0 containers: []
	W0528 21:50:53.485216   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:50:53.485221   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:50:53.485268   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:50:53.520760   70393 cri.go:89] found id: ""
	I0528 21:50:53.520785   70393 logs.go:276] 0 containers: []
	W0528 21:50:53.520793   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:50:53.520798   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:50:53.520848   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:50:53.557512   70393 cri.go:89] found id: ""
	I0528 21:50:53.557541   70393 logs.go:276] 0 containers: []
	W0528 21:50:53.557552   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:50:53.557559   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:50:53.557612   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:50:53.593557   70393 cri.go:89] found id: ""
	I0528 21:50:53.593583   70393 logs.go:276] 0 containers: []
	W0528 21:50:53.593592   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:50:53.593598   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:50:53.593643   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:50:53.628412   70393 cri.go:89] found id: ""
	I0528 21:50:53.628437   70393 logs.go:276] 0 containers: []
	W0528 21:50:53.628446   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:50:53.628453   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:50:53.628510   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:50:53.660088   70393 cri.go:89] found id: ""
	I0528 21:50:53.660114   70393 logs.go:276] 0 containers: []
	W0528 21:50:53.660123   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:50:53.660130   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:50:53.660188   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:50:53.698597   70393 cri.go:89] found id: ""
	I0528 21:50:53.698626   70393 logs.go:276] 0 containers: []
	W0528 21:50:53.698636   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:50:53.698644   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:50:53.698702   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:50:53.733164   70393 cri.go:89] found id: ""
	I0528 21:50:53.733192   70393 logs.go:276] 0 containers: []
	W0528 21:50:53.733203   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:50:53.733212   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:50:53.733228   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:50:53.784674   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:50:53.784703   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:50:53.798303   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:50:53.798328   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:50:53.869150   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:50:53.869177   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:50:53.869193   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:50:53.947896   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:50:53.947930   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:50:56.490837   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:56.504469   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:50:56.504551   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:50:56.541621   70393 cri.go:89] found id: ""
	I0528 21:50:56.541660   70393 logs.go:276] 0 containers: []
	W0528 21:50:56.541669   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:50:56.541677   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:50:56.541730   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:50:56.578388   70393 cri.go:89] found id: ""
	I0528 21:50:56.578414   70393 logs.go:276] 0 containers: []
	W0528 21:50:56.578425   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:50:56.578432   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:50:56.578489   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:50:56.616852   70393 cri.go:89] found id: ""
	I0528 21:50:56.616883   70393 logs.go:276] 0 containers: []
	W0528 21:50:56.616892   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:50:56.616900   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:50:56.616954   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:50:56.651396   70393 cri.go:89] found id: ""
	I0528 21:50:56.651423   70393 logs.go:276] 0 containers: []
	W0528 21:50:56.651431   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:50:56.651437   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:50:56.651485   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:50:56.685777   70393 cri.go:89] found id: ""
	I0528 21:50:56.685802   70393 logs.go:276] 0 containers: []
	W0528 21:50:56.685811   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:50:56.685818   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:50:56.685877   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:50:56.719466   70393 cri.go:89] found id: ""
	I0528 21:50:56.719491   70393 logs.go:276] 0 containers: []
	W0528 21:50:56.719500   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:50:56.719505   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:50:56.719565   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:50:56.756539   70393 cri.go:89] found id: ""
	I0528 21:50:56.756558   70393 logs.go:276] 0 containers: []
	W0528 21:50:56.756566   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:50:56.756571   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:50:56.756642   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:50:56.793797   70393 cri.go:89] found id: ""
	I0528 21:50:56.793823   70393 logs.go:276] 0 containers: []
	W0528 21:50:56.793830   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:50:56.793837   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:50:56.793848   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:50:56.883020   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:50:56.883056   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:50:56.932453   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:50:56.932480   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:50:56.995415   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:50:56.995445   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:50:57.011658   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:50:57.011691   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:50:57.098755   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:50:59.599754   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:50:59.613301   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:50:59.613358   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:50:59.654198   70393 cri.go:89] found id: ""
	I0528 21:50:59.654220   70393 logs.go:276] 0 containers: []
	W0528 21:50:59.654230   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:50:59.654236   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:50:59.654295   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:50:59.689342   70393 cri.go:89] found id: ""
	I0528 21:50:59.689371   70393 logs.go:276] 0 containers: []
	W0528 21:50:59.689383   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:50:59.689391   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:50:59.689456   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:50:59.723780   70393 cri.go:89] found id: ""
	I0528 21:50:59.723809   70393 logs.go:276] 0 containers: []
	W0528 21:50:59.723817   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:50:59.723823   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:50:59.723872   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:50:59.759228   70393 cri.go:89] found id: ""
	I0528 21:50:59.759267   70393 logs.go:276] 0 containers: []
	W0528 21:50:59.759277   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:50:59.759285   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:50:59.759344   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:50:59.793982   70393 cri.go:89] found id: ""
	I0528 21:50:59.794009   70393 logs.go:276] 0 containers: []
	W0528 21:50:59.794017   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:50:59.794023   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:50:59.794077   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:50:59.832837   70393 cri.go:89] found id: ""
	I0528 21:50:59.832866   70393 logs.go:276] 0 containers: []
	W0528 21:50:59.832874   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:50:59.832880   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:50:59.832935   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:50:59.868350   70393 cri.go:89] found id: ""
	I0528 21:50:59.868370   70393 logs.go:276] 0 containers: []
	W0528 21:50:59.868377   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:50:59.868383   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:50:59.868443   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:50:59.905307   70393 cri.go:89] found id: ""
	I0528 21:50:59.905335   70393 logs.go:276] 0 containers: []
	W0528 21:50:59.905346   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:50:59.905357   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:50:59.905371   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:50:59.955456   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:50:59.955486   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:50:59.969616   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:50:59.969643   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:51:00.047588   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:51:00.047610   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:51:00.047625   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:51:00.162224   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:51:00.162258   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:51:02.710918   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:51:02.725402   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:51:02.725473   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:51:02.761050   70393 cri.go:89] found id: ""
	I0528 21:51:02.761074   70393 logs.go:276] 0 containers: []
	W0528 21:51:02.761084   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:51:02.761095   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:51:02.761161   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:51:02.797916   70393 cri.go:89] found id: ""
	I0528 21:51:02.797942   70393 logs.go:276] 0 containers: []
	W0528 21:51:02.797951   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:51:02.797957   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:51:02.798022   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:51:02.833085   70393 cri.go:89] found id: ""
	I0528 21:51:02.833108   70393 logs.go:276] 0 containers: []
	W0528 21:51:02.833116   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:51:02.833121   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:51:02.833173   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:51:02.870935   70393 cri.go:89] found id: ""
	I0528 21:51:02.870957   70393 logs.go:276] 0 containers: []
	W0528 21:51:02.870969   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:51:02.870977   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:51:02.871025   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:51:02.906358   70393 cri.go:89] found id: ""
	I0528 21:51:02.906389   70393 logs.go:276] 0 containers: []
	W0528 21:51:02.906399   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:51:02.906406   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:51:02.906459   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:51:02.942636   70393 cri.go:89] found id: ""
	I0528 21:51:02.942666   70393 logs.go:276] 0 containers: []
	W0528 21:51:02.942674   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:51:02.942679   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:51:02.942725   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:51:02.987577   70393 cri.go:89] found id: ""
	I0528 21:51:02.987604   70393 logs.go:276] 0 containers: []
	W0528 21:51:02.987612   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:51:02.987618   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:51:02.987678   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:51:03.030531   70393 cri.go:89] found id: ""
	I0528 21:51:03.030554   70393 logs.go:276] 0 containers: []
	W0528 21:51:03.030564   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:51:03.030573   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:51:03.030584   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:51:03.087024   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:51:03.087061   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:51:03.101168   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:51:03.101196   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:51:03.179266   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:51:03.179292   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:51:03.179311   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:51:03.256507   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:51:03.256538   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:51:05.802335   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:51:05.816282   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:51:05.816357   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:51:05.852372   70393 cri.go:89] found id: ""
	I0528 21:51:05.852396   70393 logs.go:276] 0 containers: []
	W0528 21:51:05.852405   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:51:05.852413   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:51:05.852473   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:51:05.889191   70393 cri.go:89] found id: ""
	I0528 21:51:05.889218   70393 logs.go:276] 0 containers: []
	W0528 21:51:05.889228   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:51:05.889234   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:51:05.889284   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:51:05.924683   70393 cri.go:89] found id: ""
	I0528 21:51:05.924710   70393 logs.go:276] 0 containers: []
	W0528 21:51:05.924717   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:51:05.924723   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:51:05.924772   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:51:05.959186   70393 cri.go:89] found id: ""
	I0528 21:51:05.959219   70393 logs.go:276] 0 containers: []
	W0528 21:51:05.959229   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:51:05.959237   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:51:05.959298   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:51:05.999621   70393 cri.go:89] found id: ""
	I0528 21:51:05.999655   70393 logs.go:276] 0 containers: []
	W0528 21:51:05.999663   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:51:05.999669   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:51:05.999726   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:51:06.040679   70393 cri.go:89] found id: ""
	I0528 21:51:06.040708   70393 logs.go:276] 0 containers: []
	W0528 21:51:06.040722   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:51:06.040730   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:51:06.040778   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:51:06.088298   70393 cri.go:89] found id: ""
	I0528 21:51:06.088321   70393 logs.go:276] 0 containers: []
	W0528 21:51:06.088327   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:51:06.088333   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:51:06.088391   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:51:06.129392   70393 cri.go:89] found id: ""
	I0528 21:51:06.129416   70393 logs.go:276] 0 containers: []
	W0528 21:51:06.129424   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:51:06.129431   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:51:06.129445   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:51:06.144037   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:51:06.144064   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:51:06.224061   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:51:06.224079   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:51:06.224091   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:51:06.302254   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:51:06.302288   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:51:06.343419   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:51:06.343445   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:51:08.895084   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:51:08.910476   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:51:08.910549   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:51:08.956686   70393 cri.go:89] found id: ""
	I0528 21:51:08.956712   70393 logs.go:276] 0 containers: []
	W0528 21:51:08.956721   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:51:08.956729   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:51:08.956786   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:51:09.000406   70393 cri.go:89] found id: ""
	I0528 21:51:09.000435   70393 logs.go:276] 0 containers: []
	W0528 21:51:09.000444   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:51:09.000452   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:51:09.000511   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:51:09.043466   70393 cri.go:89] found id: ""
	I0528 21:51:09.043499   70393 logs.go:276] 0 containers: []
	W0528 21:51:09.043510   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:51:09.043516   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:51:09.043573   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:51:09.081379   70393 cri.go:89] found id: ""
	I0528 21:51:09.081406   70393 logs.go:276] 0 containers: []
	W0528 21:51:09.081416   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:51:09.081428   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:51:09.081483   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:51:09.123548   70393 cri.go:89] found id: ""
	I0528 21:51:09.123572   70393 logs.go:276] 0 containers: []
	W0528 21:51:09.123581   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:51:09.123589   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:51:09.123644   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:51:09.166678   70393 cri.go:89] found id: ""
	I0528 21:51:09.166705   70393 logs.go:276] 0 containers: []
	W0528 21:51:09.166716   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:51:09.166724   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:51:09.166775   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:51:09.206117   70393 cri.go:89] found id: ""
	I0528 21:51:09.206140   70393 logs.go:276] 0 containers: []
	W0528 21:51:09.206150   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:51:09.206157   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:51:09.206217   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:51:09.248366   70393 cri.go:89] found id: ""
	I0528 21:51:09.248388   70393 logs.go:276] 0 containers: []
	W0528 21:51:09.248396   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:51:09.248403   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:51:09.248416   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:51:09.326896   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:51:09.326925   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:51:09.326941   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:51:09.445527   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:51:09.445564   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:51:09.490722   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:51:09.490749   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:51:09.546393   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:51:09.546424   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:51:12.060923   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:51:12.074257   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:51:12.074318   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:51:12.110595   70393 cri.go:89] found id: ""
	I0528 21:51:12.110624   70393 logs.go:276] 0 containers: []
	W0528 21:51:12.110632   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:51:12.110637   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:51:12.110689   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:51:12.149637   70393 cri.go:89] found id: ""
	I0528 21:51:12.149659   70393 logs.go:276] 0 containers: []
	W0528 21:51:12.149666   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:51:12.149671   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:51:12.149726   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:51:12.190652   70393 cri.go:89] found id: ""
	I0528 21:51:12.190673   70393 logs.go:276] 0 containers: []
	W0528 21:51:12.190680   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:51:12.190685   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:51:12.190729   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:51:12.231150   70393 cri.go:89] found id: ""
	I0528 21:51:12.231192   70393 logs.go:276] 0 containers: []
	W0528 21:51:12.231203   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:51:12.231211   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:51:12.231275   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:51:12.271365   70393 cri.go:89] found id: ""
	I0528 21:51:12.271393   70393 logs.go:276] 0 containers: []
	W0528 21:51:12.271403   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:51:12.271410   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:51:12.271465   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:51:12.311691   70393 cri.go:89] found id: ""
	I0528 21:51:12.311717   70393 logs.go:276] 0 containers: []
	W0528 21:51:12.311727   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:51:12.311735   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:51:12.311800   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:51:12.349284   70393 cri.go:89] found id: ""
	I0528 21:51:12.349315   70393 logs.go:276] 0 containers: []
	W0528 21:51:12.349325   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:51:12.349333   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:51:12.349393   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:51:12.388759   70393 cri.go:89] found id: ""
	I0528 21:51:12.388789   70393 logs.go:276] 0 containers: []
	W0528 21:51:12.388800   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:51:12.388811   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:51:12.388826   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:51:12.442308   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:51:12.442338   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:51:12.455824   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:51:12.455847   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:51:12.534582   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:51:12.534604   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:51:12.534620   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:51:12.622161   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:51:12.622192   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:51:15.163786   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:51:15.177280   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:51:15.177353   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:51:15.212022   70393 cri.go:89] found id: ""
	I0528 21:51:15.212047   70393 logs.go:276] 0 containers: []
	W0528 21:51:15.212058   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:51:15.212065   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:51:15.212126   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:51:15.247568   70393 cri.go:89] found id: ""
	I0528 21:51:15.247591   70393 logs.go:276] 0 containers: []
	W0528 21:51:15.247599   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:51:15.247607   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:51:15.247668   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:51:15.284567   70393 cri.go:89] found id: ""
	I0528 21:51:15.284591   70393 logs.go:276] 0 containers: []
	W0528 21:51:15.284598   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:51:15.284603   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:51:15.284650   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:51:15.328765   70393 cri.go:89] found id: ""
	I0528 21:51:15.328792   70393 logs.go:276] 0 containers: []
	W0528 21:51:15.328800   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:51:15.328806   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:51:15.328854   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:51:15.370465   70393 cri.go:89] found id: ""
	I0528 21:51:15.370493   70393 logs.go:276] 0 containers: []
	W0528 21:51:15.370504   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:51:15.370512   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:51:15.370574   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:51:15.410495   70393 cri.go:89] found id: ""
	I0528 21:51:15.410518   70393 logs.go:276] 0 containers: []
	W0528 21:51:15.410525   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:51:15.410531   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:51:15.410578   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:51:15.445349   70393 cri.go:89] found id: ""
	I0528 21:51:15.445386   70393 logs.go:276] 0 containers: []
	W0528 21:51:15.445395   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:51:15.445400   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:51:15.445445   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:51:15.483870   70393 cri.go:89] found id: ""
	I0528 21:51:15.483895   70393 logs.go:276] 0 containers: []
	W0528 21:51:15.483903   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:51:15.483911   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:51:15.483923   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:51:15.564169   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:51:15.564199   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:51:15.610751   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:51:15.610781   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:51:15.660664   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:51:15.660696   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:51:15.674496   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:51:15.674526   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:51:15.751566   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
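	[editor's note] The cycle above repeats minikube's container probe for each control-plane component: it runs `sudo crictl ps -a --quiet --name=<component>` and finds no IDs, hence the "No container was found matching ..." warnings. The following is only a rough, hypothetical local sketch of that probe (not the project's cri.go, and it skips the SSH layer); it assumes crictl and sudo are available on the host.

	// probe_cri.go: hypothetical sketch of the per-component container probe seen in the log.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainers mirrors the logged command: sudo crictl ps -a --quiet --name=<name>.
	// It returns whatever container IDs crictl prints, one per line.
	func listContainers(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		// Same component list the log cycles through.
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, c := range components {
			ids, err := listContainers(c)
			if err != nil {
				fmt.Printf("probe %q failed: %v\n", c, err)
				continue
			}
			if len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", c)
			} else {
				fmt.Printf("%q: %d container(s): %v\n", c, len(ids), ids)
			}
		}
	}

	In the failing run every probe returns an empty list, which is why the loop keeps retrying a few seconds later.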
	I0528 21:51:18.251732   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:51:18.264953   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:51:18.265026   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:51:18.325737   70393 cri.go:89] found id: ""
	I0528 21:51:18.325771   70393 logs.go:276] 0 containers: []
	W0528 21:51:18.325782   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:51:18.325790   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:51:18.325862   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:51:18.359329   70393 cri.go:89] found id: ""
	I0528 21:51:18.359354   70393 logs.go:276] 0 containers: []
	W0528 21:51:18.359361   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:51:18.359367   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:51:18.359426   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:51:18.389909   70393 cri.go:89] found id: ""
	I0528 21:51:18.389934   70393 logs.go:276] 0 containers: []
	W0528 21:51:18.389942   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:51:18.389950   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:51:18.390005   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:51:18.430314   70393 cri.go:89] found id: ""
	I0528 21:51:18.430343   70393 logs.go:276] 0 containers: []
	W0528 21:51:18.430354   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:51:18.430362   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:51:18.430419   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:51:18.463824   70393 cri.go:89] found id: ""
	I0528 21:51:18.463852   70393 logs.go:276] 0 containers: []
	W0528 21:51:18.463863   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:51:18.463871   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:51:18.463930   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:51:18.498902   70393 cri.go:89] found id: ""
	I0528 21:51:18.498925   70393 logs.go:276] 0 containers: []
	W0528 21:51:18.498933   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:51:18.498938   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:51:18.498990   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:51:18.536591   70393 cri.go:89] found id: ""
	I0528 21:51:18.536617   70393 logs.go:276] 0 containers: []
	W0528 21:51:18.536624   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:51:18.536629   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:51:18.536684   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:51:18.571023   70393 cri.go:89] found id: ""
	I0528 21:51:18.571044   70393 logs.go:276] 0 containers: []
	W0528 21:51:18.571053   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:51:18.571061   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:51:18.571072   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:51:18.619796   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:51:18.619826   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:51:18.633615   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:51:18.633639   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:51:18.710651   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:51:18.710672   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:51:18.710688   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:51:18.793587   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:51:18.793623   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
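	[editor's note] After every probe comes back empty, the run falls back to gathering host-side diagnostics: kubelet and CRI-O journals, dmesg, container status, and `kubectl describe nodes`. The sketch below is a hypothetical local approximation of that gathering step built from the exact shell commands shown in the log; the kubectl binary path and kubeconfig location are copied from the log output and may not exist on another machine. With the apiserver down, the describe-nodes command fails with the same "connection to the server localhost:8443 was refused" error recorded above.

	// gather_logs.go: hypothetical sketch of the fallback log-gathering step.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Command strings are taken verbatim from the log lines above.
		sources := map[string]string{
			"kubelet":          "sudo journalctl -u kubelet -n 400",
			"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
			"CRI-O":            "sudo journalctl -u crio -n 400",
			"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
			"describe nodes":   "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
		}
		for name, cmd := range sources {
			out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
			if err != nil {
				// describe nodes fails here when no apiserver is listening on localhost:8443.
				fmt.Printf("%s: command failed: %v\n", name, err)
				continue
			}
			fmt.Printf("%s: gathered %d bytes\n", name, len(out))
		}
	}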
	I0528 21:51:21.335460   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:51:21.349191   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:51:21.349250   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:51:21.387399   70393 cri.go:89] found id: ""
	I0528 21:51:21.387421   70393 logs.go:276] 0 containers: []
	W0528 21:51:21.387428   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:51:21.387434   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:51:21.387484   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:51:21.425288   70393 cri.go:89] found id: ""
	I0528 21:51:21.425308   70393 logs.go:276] 0 containers: []
	W0528 21:51:21.425316   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:51:21.425321   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:51:21.425365   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:51:21.460639   70393 cri.go:89] found id: ""
	I0528 21:51:21.460667   70393 logs.go:276] 0 containers: []
	W0528 21:51:21.460677   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:51:21.460684   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:51:21.460768   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:51:21.494891   70393 cri.go:89] found id: ""
	I0528 21:51:21.494915   70393 logs.go:276] 0 containers: []
	W0528 21:51:21.494923   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:51:21.494929   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:51:21.494984   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:51:21.529749   70393 cri.go:89] found id: ""
	I0528 21:51:21.529789   70393 logs.go:276] 0 containers: []
	W0528 21:51:21.529800   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:51:21.529807   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:51:21.529864   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:51:21.565112   70393 cri.go:89] found id: ""
	I0528 21:51:21.565142   70393 logs.go:276] 0 containers: []
	W0528 21:51:21.565153   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:51:21.565162   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:51:21.565219   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:51:21.600164   70393 cri.go:89] found id: ""
	I0528 21:51:21.600191   70393 logs.go:276] 0 containers: []
	W0528 21:51:21.600199   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:51:21.600205   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:51:21.600266   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:51:21.636741   70393 cri.go:89] found id: ""
	I0528 21:51:21.636767   70393 logs.go:276] 0 containers: []
	W0528 21:51:21.636775   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:51:21.636787   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:51:21.636799   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:51:21.649995   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:51:21.650019   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:51:21.718910   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:51:21.718933   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:51:21.718949   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:51:21.800374   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:51:21.800411   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:51:21.840704   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:51:21.840730   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:51:24.392391   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:51:24.405556   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:51:24.405620   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:51:24.441105   70393 cri.go:89] found id: ""
	I0528 21:51:24.441138   70393 logs.go:276] 0 containers: []
	W0528 21:51:24.441146   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:51:24.441154   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:51:24.441201   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:51:24.479851   70393 cri.go:89] found id: ""
	I0528 21:51:24.479872   70393 logs.go:276] 0 containers: []
	W0528 21:51:24.479879   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:51:24.479885   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:51:24.479944   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:51:24.513160   70393 cri.go:89] found id: ""
	I0528 21:51:24.514671   70393 logs.go:276] 0 containers: []
	W0528 21:51:24.514684   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:51:24.514690   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:51:24.514737   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:51:24.549866   70393 cri.go:89] found id: ""
	I0528 21:51:24.549891   70393 logs.go:276] 0 containers: []
	W0528 21:51:24.549900   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:51:24.549906   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:51:24.549952   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:51:24.583566   70393 cri.go:89] found id: ""
	I0528 21:51:24.583590   70393 logs.go:276] 0 containers: []
	W0528 21:51:24.583598   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:51:24.583604   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:51:24.583653   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:51:24.619835   70393 cri.go:89] found id: ""
	I0528 21:51:24.619861   70393 logs.go:276] 0 containers: []
	W0528 21:51:24.619870   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:51:24.619877   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:51:24.619950   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:51:24.657193   70393 cri.go:89] found id: ""
	I0528 21:51:24.657221   70393 logs.go:276] 0 containers: []
	W0528 21:51:24.657232   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:51:24.657241   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:51:24.657300   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:51:24.701460   70393 cri.go:89] found id: ""
	I0528 21:51:24.701482   70393 logs.go:276] 0 containers: []
	W0528 21:51:24.701490   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:51:24.701499   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:51:24.701510   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:51:24.757410   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:51:24.757438   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:51:24.772672   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:51:24.772693   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:51:24.855618   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:51:24.855650   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:51:24.855665   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:51:24.935703   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:51:24.935753   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:51:27.476501   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:51:27.489276   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:51:27.489332   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:51:27.529579   70393 cri.go:89] found id: ""
	I0528 21:51:27.529606   70393 logs.go:276] 0 containers: []
	W0528 21:51:27.529613   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:51:27.529623   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:51:27.529687   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:51:27.564109   70393 cri.go:89] found id: ""
	I0528 21:51:27.564130   70393 logs.go:276] 0 containers: []
	W0528 21:51:27.564140   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:51:27.564146   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:51:27.564200   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:51:27.595751   70393 cri.go:89] found id: ""
	I0528 21:51:27.595774   70393 logs.go:276] 0 containers: []
	W0528 21:51:27.595784   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:51:27.595790   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:51:27.595846   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:51:27.629919   70393 cri.go:89] found id: ""
	I0528 21:51:27.629942   70393 logs.go:276] 0 containers: []
	W0528 21:51:27.629951   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:51:27.629958   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:51:27.630017   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:51:27.665885   70393 cri.go:89] found id: ""
	I0528 21:51:27.665911   70393 logs.go:276] 0 containers: []
	W0528 21:51:27.665921   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:51:27.665928   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:51:27.665995   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:51:27.701266   70393 cri.go:89] found id: ""
	I0528 21:51:27.701294   70393 logs.go:276] 0 containers: []
	W0528 21:51:27.701302   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:51:27.701319   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:51:27.701394   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:51:27.733778   70393 cri.go:89] found id: ""
	I0528 21:51:27.733803   70393 logs.go:276] 0 containers: []
	W0528 21:51:27.733815   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:51:27.733822   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:51:27.733885   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:51:27.769176   70393 cri.go:89] found id: ""
	I0528 21:51:27.769200   70393 logs.go:276] 0 containers: []
	W0528 21:51:27.769213   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:51:27.769223   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:51:27.769237   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:51:27.812033   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:51:27.812060   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:51:27.863400   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:51:27.863428   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:51:27.876546   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:51:27.876572   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:51:27.939213   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:51:27.939247   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:51:27.939260   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:51:30.518261   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:51:30.530631   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:51:30.530702   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:51:30.566056   70393 cri.go:89] found id: ""
	I0528 21:51:30.566082   70393 logs.go:276] 0 containers: []
	W0528 21:51:30.566093   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:51:30.566100   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:51:30.566161   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:51:30.600143   70393 cri.go:89] found id: ""
	I0528 21:51:30.600174   70393 logs.go:276] 0 containers: []
	W0528 21:51:30.600185   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:51:30.600193   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:51:30.600266   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:51:30.632639   70393 cri.go:89] found id: ""
	I0528 21:51:30.632664   70393 logs.go:276] 0 containers: []
	W0528 21:51:30.632672   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:51:30.632678   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:51:30.632737   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:51:30.667386   70393 cri.go:89] found id: ""
	I0528 21:51:30.667416   70393 logs.go:276] 0 containers: []
	W0528 21:51:30.667428   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:51:30.667436   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:51:30.667493   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:51:30.701685   70393 cri.go:89] found id: ""
	I0528 21:51:30.701711   70393 logs.go:276] 0 containers: []
	W0528 21:51:30.701718   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:51:30.701723   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:51:30.701789   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:51:30.736768   70393 cri.go:89] found id: ""
	I0528 21:51:30.736793   70393 logs.go:276] 0 containers: []
	W0528 21:51:30.736802   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:51:30.736810   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:51:30.736872   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:51:30.773025   70393 cri.go:89] found id: ""
	I0528 21:51:30.773047   70393 logs.go:276] 0 containers: []
	W0528 21:51:30.773055   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:51:30.773060   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:51:30.773108   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:51:30.806682   70393 cri.go:89] found id: ""
	I0528 21:51:30.806704   70393 logs.go:276] 0 containers: []
	W0528 21:51:30.806711   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:51:30.806719   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:51:30.806731   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:51:30.874571   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:51:30.874593   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:51:30.874604   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:51:30.959218   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:51:30.959252   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:51:30.999823   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:51:30.999860   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:51:31.053391   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:51:31.053421   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:51:33.567133   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:51:33.580639   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:51:33.580690   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:51:33.619631   70393 cri.go:89] found id: ""
	I0528 21:51:33.619658   70393 logs.go:276] 0 containers: []
	W0528 21:51:33.619667   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:51:33.619673   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:51:33.619725   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:51:33.654687   70393 cri.go:89] found id: ""
	I0528 21:51:33.654717   70393 logs.go:276] 0 containers: []
	W0528 21:51:33.654729   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:51:33.654735   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:51:33.654791   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:51:33.691691   70393 cri.go:89] found id: ""
	I0528 21:51:33.691721   70393 logs.go:276] 0 containers: []
	W0528 21:51:33.691731   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:51:33.691739   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:51:33.691805   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:51:33.729268   70393 cri.go:89] found id: ""
	I0528 21:51:33.729291   70393 logs.go:276] 0 containers: []
	W0528 21:51:33.729299   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:51:33.729305   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:51:33.729368   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:51:33.763548   70393 cri.go:89] found id: ""
	I0528 21:51:33.763570   70393 logs.go:276] 0 containers: []
	W0528 21:51:33.763578   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:51:33.763583   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:51:33.763629   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:51:33.797783   70393 cri.go:89] found id: ""
	I0528 21:51:33.797809   70393 logs.go:276] 0 containers: []
	W0528 21:51:33.797817   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:51:33.797824   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:51:33.797881   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:51:33.832569   70393 cri.go:89] found id: ""
	I0528 21:51:33.832596   70393 logs.go:276] 0 containers: []
	W0528 21:51:33.832604   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:51:33.832611   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:51:33.832669   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:51:33.874876   70393 cri.go:89] found id: ""
	I0528 21:51:33.874906   70393 logs.go:276] 0 containers: []
	W0528 21:51:33.874916   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:51:33.874940   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:51:33.874951   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:51:33.918610   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:51:33.918642   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:51:33.970803   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:51:33.970835   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:51:33.985428   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:51:33.985451   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:51:34.052799   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:51:34.052816   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:51:34.052828   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:51:36.637563   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:51:36.650836   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:51:36.650890   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:51:36.684618   70393 cri.go:89] found id: ""
	I0528 21:51:36.684647   70393 logs.go:276] 0 containers: []
	W0528 21:51:36.684657   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:51:36.684663   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:51:36.684732   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:51:36.720175   70393 cri.go:89] found id: ""
	I0528 21:51:36.720202   70393 logs.go:276] 0 containers: []
	W0528 21:51:36.720212   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:51:36.720226   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:51:36.720308   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:51:36.764435   70393 cri.go:89] found id: ""
	I0528 21:51:36.764459   70393 logs.go:276] 0 containers: []
	W0528 21:51:36.764469   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:51:36.764476   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:51:36.764537   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:51:36.795970   70393 cri.go:89] found id: ""
	I0528 21:51:36.795995   70393 logs.go:276] 0 containers: []
	W0528 21:51:36.796005   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:51:36.796012   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:51:36.796071   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:51:36.830394   70393 cri.go:89] found id: ""
	I0528 21:51:36.830418   70393 logs.go:276] 0 containers: []
	W0528 21:51:36.830428   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:51:36.830435   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:51:36.830490   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:51:36.865605   70393 cri.go:89] found id: ""
	I0528 21:51:36.865633   70393 logs.go:276] 0 containers: []
	W0528 21:51:36.865640   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:51:36.865645   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:51:36.865693   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:51:36.899533   70393 cri.go:89] found id: ""
	I0528 21:51:36.899561   70393 logs.go:276] 0 containers: []
	W0528 21:51:36.899568   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:51:36.899576   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:51:36.899627   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:51:36.931904   70393 cri.go:89] found id: ""
	I0528 21:51:36.931926   70393 logs.go:276] 0 containers: []
	W0528 21:51:36.931933   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:51:36.931941   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:51:36.931954   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:51:36.972407   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:51:36.972432   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:51:37.021303   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:51:37.021333   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:51:37.036132   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:51:37.036162   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:51:37.106735   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:51:37.106754   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:51:37.106766   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:51:39.683768   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:51:39.697811   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:51:39.697872   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:51:39.733063   70393 cri.go:89] found id: ""
	I0528 21:51:39.733091   70393 logs.go:276] 0 containers: []
	W0528 21:51:39.733102   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:51:39.733109   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:51:39.733172   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:51:39.768737   70393 cri.go:89] found id: ""
	I0528 21:51:39.768763   70393 logs.go:276] 0 containers: []
	W0528 21:51:39.768771   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:51:39.768776   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:51:39.768824   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:51:39.806961   70393 cri.go:89] found id: ""
	I0528 21:51:39.806993   70393 logs.go:276] 0 containers: []
	W0528 21:51:39.807001   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:51:39.807007   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:51:39.807062   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:51:39.842512   70393 cri.go:89] found id: ""
	I0528 21:51:39.842535   70393 logs.go:276] 0 containers: []
	W0528 21:51:39.842542   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:51:39.842548   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:51:39.842600   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:51:39.881647   70393 cri.go:89] found id: ""
	I0528 21:51:39.881680   70393 logs.go:276] 0 containers: []
	W0528 21:51:39.881697   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:51:39.881704   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:51:39.881751   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:51:39.915378   70393 cri.go:89] found id: ""
	I0528 21:51:39.915402   70393 logs.go:276] 0 containers: []
	W0528 21:51:39.915412   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:51:39.915420   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:51:39.915485   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:51:39.949932   70393 cri.go:89] found id: ""
	I0528 21:51:39.949957   70393 logs.go:276] 0 containers: []
	W0528 21:51:39.949966   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:51:39.949973   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:51:39.950045   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:51:39.986860   70393 cri.go:89] found id: ""
	I0528 21:51:39.986887   70393 logs.go:276] 0 containers: []
	W0528 21:51:39.986898   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:51:39.986909   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:51:39.986920   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:51:40.040314   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:51:40.040343   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:51:40.054311   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:51:40.054335   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:51:40.124036   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:51:40.124057   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:51:40.124070   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:51:40.207177   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:51:40.207211   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:51:42.745887   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:51:42.758688   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:51:42.758744   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:51:42.793343   70393 cri.go:89] found id: ""
	I0528 21:51:42.793374   70393 logs.go:276] 0 containers: []
	W0528 21:51:42.793386   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:51:42.793393   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:51:42.793466   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:51:42.827465   70393 cri.go:89] found id: ""
	I0528 21:51:42.827488   70393 logs.go:276] 0 containers: []
	W0528 21:51:42.827497   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:51:42.827508   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:51:42.827569   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:51:42.861868   70393 cri.go:89] found id: ""
	I0528 21:51:42.861892   70393 logs.go:276] 0 containers: []
	W0528 21:51:42.861903   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:51:42.861911   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:51:42.861977   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:51:42.898503   70393 cri.go:89] found id: ""
	I0528 21:51:42.898532   70393 logs.go:276] 0 containers: []
	W0528 21:51:42.898548   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:51:42.898555   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:51:42.898611   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:51:42.933909   70393 cri.go:89] found id: ""
	I0528 21:51:42.933939   70393 logs.go:276] 0 containers: []
	W0528 21:51:42.933949   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:51:42.933957   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:51:42.934019   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:51:42.967677   70393 cri.go:89] found id: ""
	I0528 21:51:42.967700   70393 logs.go:276] 0 containers: []
	W0528 21:51:42.967708   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:51:42.967713   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:51:42.967768   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:51:43.001540   70393 cri.go:89] found id: ""
	I0528 21:51:43.001564   70393 logs.go:276] 0 containers: []
	W0528 21:51:43.001572   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:51:43.001578   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:51:43.001626   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:51:43.035661   70393 cri.go:89] found id: ""
	I0528 21:51:43.035682   70393 logs.go:276] 0 containers: []
	W0528 21:51:43.035693   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:51:43.035703   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:51:43.035715   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:51:43.102604   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:51:43.102621   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:51:43.102636   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:51:43.180819   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:51:43.180859   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:51:43.219476   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:51:43.219505   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:51:43.271570   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:51:43.271602   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:51:45.786349   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:51:45.799172   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:51:45.799229   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:51:45.832772   70393 cri.go:89] found id: ""
	I0528 21:51:45.832796   70393 logs.go:276] 0 containers: []
	W0528 21:51:45.832807   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:51:45.832814   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:51:45.832882   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:51:45.867486   70393 cri.go:89] found id: ""
	I0528 21:51:45.867514   70393 logs.go:276] 0 containers: []
	W0528 21:51:45.867524   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:51:45.867532   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:51:45.867598   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:51:45.903525   70393 cri.go:89] found id: ""
	I0528 21:51:45.903555   70393 logs.go:276] 0 containers: []
	W0528 21:51:45.903566   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:51:45.903573   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:51:45.903633   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:51:45.937577   70393 cri.go:89] found id: ""
	I0528 21:51:45.937605   70393 logs.go:276] 0 containers: []
	W0528 21:51:45.937613   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:51:45.937618   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:51:45.937665   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:51:45.973586   70393 cri.go:89] found id: ""
	I0528 21:51:45.973614   70393 logs.go:276] 0 containers: []
	W0528 21:51:45.973621   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:51:45.973626   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:51:45.973672   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:51:46.007373   70393 cri.go:89] found id: ""
	I0528 21:51:46.007394   70393 logs.go:276] 0 containers: []
	W0528 21:51:46.007401   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:51:46.007407   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:51:46.007450   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:51:46.041440   70393 cri.go:89] found id: ""
	I0528 21:51:46.041470   70393 logs.go:276] 0 containers: []
	W0528 21:51:46.041481   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:51:46.041488   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:51:46.041551   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:51:46.079705   70393 cri.go:89] found id: ""
	I0528 21:51:46.079729   70393 logs.go:276] 0 containers: []
	W0528 21:51:46.079736   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:51:46.079750   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:51:46.079768   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:51:46.148710   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:51:46.148731   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:51:46.148743   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:51:46.227084   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:51:46.227117   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:51:46.270620   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:51:46.270654   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:51:46.324724   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:51:46.324758   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:51:48.838902   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:51:48.852075   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:51:48.852140   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:51:48.893555   70393 cri.go:89] found id: ""
	I0528 21:51:48.893583   70393 logs.go:276] 0 containers: []
	W0528 21:51:48.893589   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:51:48.893595   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:51:48.893652   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:51:48.928104   70393 cri.go:89] found id: ""
	I0528 21:51:48.928130   70393 logs.go:276] 0 containers: []
	W0528 21:51:48.928138   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:51:48.928144   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:51:48.928190   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:51:48.961348   70393 cri.go:89] found id: ""
	I0528 21:51:48.961372   70393 logs.go:276] 0 containers: []
	W0528 21:51:48.961379   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:51:48.961385   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:51:48.961434   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:51:48.995448   70393 cri.go:89] found id: ""
	I0528 21:51:48.995470   70393 logs.go:276] 0 containers: []
	W0528 21:51:48.995477   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:51:48.995483   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:51:48.995525   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:51:49.029588   70393 cri.go:89] found id: ""
	I0528 21:51:49.029614   70393 logs.go:276] 0 containers: []
	W0528 21:51:49.029624   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:51:49.029639   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:51:49.029697   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:51:49.066933   70393 cri.go:89] found id: ""
	I0528 21:51:49.066957   70393 logs.go:276] 0 containers: []
	W0528 21:51:49.066975   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:51:49.066983   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:51:49.067043   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:51:49.102583   70393 cri.go:89] found id: ""
	I0528 21:51:49.102607   70393 logs.go:276] 0 containers: []
	W0528 21:51:49.102617   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:51:49.102625   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:51:49.102679   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:51:49.134163   70393 cri.go:89] found id: ""
	I0528 21:51:49.134185   70393 logs.go:276] 0 containers: []
	W0528 21:51:49.134195   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:51:49.134204   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:51:49.134219   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:51:49.181791   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:51:49.181824   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:51:49.195409   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:51:49.195439   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:51:49.265845   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:51:49.265869   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:51:49.265884   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:51:49.344063   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:51:49.344096   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:51:51.885075   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:51:51.900702   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:51:51.900773   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:51:51.936694   70393 cri.go:89] found id: ""
	I0528 21:51:51.936720   70393 logs.go:276] 0 containers: []
	W0528 21:51:51.936728   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:51:51.936735   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:51:51.936782   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:51:51.978849   70393 cri.go:89] found id: ""
	I0528 21:51:51.978878   70393 logs.go:276] 0 containers: []
	W0528 21:51:51.978885   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:51:51.978891   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:51:51.978972   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:51:52.012678   70393 cri.go:89] found id: ""
	I0528 21:51:52.012700   70393 logs.go:276] 0 containers: []
	W0528 21:51:52.012708   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:51:52.012713   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:51:52.012761   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:51:52.047767   70393 cri.go:89] found id: ""
	I0528 21:51:52.047797   70393 logs.go:276] 0 containers: []
	W0528 21:51:52.047807   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:51:52.047815   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:51:52.047876   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:51:52.086084   70393 cri.go:89] found id: ""
	I0528 21:51:52.086107   70393 logs.go:276] 0 containers: []
	W0528 21:51:52.086115   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:51:52.086121   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:51:52.086174   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:51:52.123529   70393 cri.go:89] found id: ""
	I0528 21:51:52.123559   70393 logs.go:276] 0 containers: []
	W0528 21:51:52.123568   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:51:52.123575   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:51:52.123637   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:51:52.163567   70393 cri.go:89] found id: ""
	I0528 21:51:52.163595   70393 logs.go:276] 0 containers: []
	W0528 21:51:52.163605   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:51:52.163612   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:51:52.163668   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:51:52.201420   70393 cri.go:89] found id: ""
	I0528 21:51:52.201447   70393 logs.go:276] 0 containers: []
	W0528 21:51:52.201457   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:51:52.201468   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:51:52.201484   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:51:52.254951   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:51:52.254985   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:51:52.269569   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:51:52.269608   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:51:52.341893   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:51:52.341912   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:51:52.341922   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:51:52.420033   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:51:52.420067   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:51:54.961639   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:51:54.976106   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:51:54.976169   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:51:55.008887   70393 cri.go:89] found id: ""
	I0528 21:51:55.008916   70393 logs.go:276] 0 containers: []
	W0528 21:51:55.008926   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:51:55.008934   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:51:55.009001   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:51:55.043703   70393 cri.go:89] found id: ""
	I0528 21:51:55.043728   70393 logs.go:276] 0 containers: []
	W0528 21:51:55.043736   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:51:55.043742   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:51:55.043793   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:51:55.078397   70393 cri.go:89] found id: ""
	I0528 21:51:55.078427   70393 logs.go:276] 0 containers: []
	W0528 21:51:55.078436   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:51:55.078443   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:51:55.078506   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:51:55.113551   70393 cri.go:89] found id: ""
	I0528 21:51:55.113578   70393 logs.go:276] 0 containers: []
	W0528 21:51:55.113586   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:51:55.113592   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:51:55.113643   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:51:55.148401   70393 cri.go:89] found id: ""
	I0528 21:51:55.148428   70393 logs.go:276] 0 containers: []
	W0528 21:51:55.148437   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:51:55.148448   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:51:55.148507   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:51:55.183595   70393 cri.go:89] found id: ""
	I0528 21:51:55.183624   70393 logs.go:276] 0 containers: []
	W0528 21:51:55.183636   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:51:55.183645   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:51:55.183715   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:51:55.219179   70393 cri.go:89] found id: ""
	I0528 21:51:55.219205   70393 logs.go:276] 0 containers: []
	W0528 21:51:55.219213   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:51:55.219223   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:51:55.219301   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:51:55.252996   70393 cri.go:89] found id: ""
	I0528 21:51:55.253025   70393 logs.go:276] 0 containers: []
	W0528 21:51:55.253035   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:51:55.253045   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:51:55.253059   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:51:55.309070   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:51:55.309104   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:51:55.323861   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:51:55.323887   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:51:55.397138   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:51:55.397161   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:51:55.397177   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:51:55.472381   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:51:55.472415   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:51:58.015496   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:51:58.030492   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:51:58.030558   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:51:58.065780   70393 cri.go:89] found id: ""
	I0528 21:51:58.065812   70393 logs.go:276] 0 containers: []
	W0528 21:51:58.065823   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:51:58.065830   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:51:58.065890   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:51:58.108617   70393 cri.go:89] found id: ""
	I0528 21:51:58.108649   70393 logs.go:276] 0 containers: []
	W0528 21:51:58.108660   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:51:58.108668   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:51:58.108732   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:51:58.149848   70393 cri.go:89] found id: ""
	I0528 21:51:58.149874   70393 logs.go:276] 0 containers: []
	W0528 21:51:58.149884   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:51:58.149891   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:51:58.149961   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:51:58.189756   70393 cri.go:89] found id: ""
	I0528 21:51:58.189798   70393 logs.go:276] 0 containers: []
	W0528 21:51:58.189809   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:51:58.189820   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:51:58.189880   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:51:58.226682   70393 cri.go:89] found id: ""
	I0528 21:51:58.226713   70393 logs.go:276] 0 containers: []
	W0528 21:51:58.226724   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:51:58.226738   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:51:58.226798   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:51:58.267175   70393 cri.go:89] found id: ""
	I0528 21:51:58.267198   70393 logs.go:276] 0 containers: []
	W0528 21:51:58.267204   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:51:58.267210   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:51:58.267266   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:51:58.299271   70393 cri.go:89] found id: ""
	I0528 21:51:58.299297   70393 logs.go:276] 0 containers: []
	W0528 21:51:58.299304   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:51:58.299311   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:51:58.299369   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:51:58.338285   70393 cri.go:89] found id: ""
	I0528 21:51:58.338315   70393 logs.go:276] 0 containers: []
	W0528 21:51:58.338325   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:51:58.338336   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:51:58.338352   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:51:58.392118   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:51:58.392151   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:51:58.407179   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:51:58.407207   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:51:58.474091   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:51:58.474115   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:51:58.474131   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:51:58.557830   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:51:58.557867   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:52:01.101645   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:52:01.115356   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:52:01.115436   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:52:01.154841   70393 cri.go:89] found id: ""
	I0528 21:52:01.154868   70393 logs.go:276] 0 containers: []
	W0528 21:52:01.154878   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:52:01.154885   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:52:01.154949   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:52:01.196043   70393 cri.go:89] found id: ""
	I0528 21:52:01.196066   70393 logs.go:276] 0 containers: []
	W0528 21:52:01.196074   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:52:01.196080   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:52:01.196186   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:52:01.238218   70393 cri.go:89] found id: ""
	I0528 21:52:01.238240   70393 logs.go:276] 0 containers: []
	W0528 21:52:01.238248   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:52:01.238253   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:52:01.238300   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:52:01.276794   70393 cri.go:89] found id: ""
	I0528 21:52:01.276822   70393 logs.go:276] 0 containers: []
	W0528 21:52:01.276831   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:52:01.276839   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:52:01.276904   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:52:01.317463   70393 cri.go:89] found id: ""
	I0528 21:52:01.317490   70393 logs.go:276] 0 containers: []
	W0528 21:52:01.317500   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:52:01.317506   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:52:01.317568   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:52:01.353185   70393 cri.go:89] found id: ""
	I0528 21:52:01.353214   70393 logs.go:276] 0 containers: []
	W0528 21:52:01.353226   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:52:01.353233   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:52:01.353292   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:52:01.394447   70393 cri.go:89] found id: ""
	I0528 21:52:01.394475   70393 logs.go:276] 0 containers: []
	W0528 21:52:01.394486   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:52:01.394493   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:52:01.394556   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:52:01.430490   70393 cri.go:89] found id: ""
	I0528 21:52:01.430515   70393 logs.go:276] 0 containers: []
	W0528 21:52:01.430527   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:52:01.430536   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:52:01.430551   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:52:01.508415   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:52:01.508453   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:52:01.550362   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:52:01.550392   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:52:01.607514   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:52:01.607555   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:52:01.623626   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:52:01.623659   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:52:01.703559   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:52:04.204695   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:52:04.219495   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:52:04.219560   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:52:04.257192   70393 cri.go:89] found id: ""
	I0528 21:52:04.257232   70393 logs.go:276] 0 containers: []
	W0528 21:52:04.257244   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:52:04.257252   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:52:04.257312   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:52:04.316018   70393 cri.go:89] found id: ""
	I0528 21:52:04.316040   70393 logs.go:276] 0 containers: []
	W0528 21:52:04.316050   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:52:04.316061   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:52:04.316120   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:52:04.360233   70393 cri.go:89] found id: ""
	I0528 21:52:04.360263   70393 logs.go:276] 0 containers: []
	W0528 21:52:04.360275   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:52:04.360282   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:52:04.360345   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:52:04.395681   70393 cri.go:89] found id: ""
	I0528 21:52:04.395709   70393 logs.go:276] 0 containers: []
	W0528 21:52:04.395718   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:52:04.395725   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:52:04.395793   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:52:04.432699   70393 cri.go:89] found id: ""
	I0528 21:52:04.432734   70393 logs.go:276] 0 containers: []
	W0528 21:52:04.432746   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:52:04.432753   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:52:04.432815   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:52:04.469777   70393 cri.go:89] found id: ""
	I0528 21:52:04.469811   70393 logs.go:276] 0 containers: []
	W0528 21:52:04.469822   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:52:04.469831   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:52:04.469893   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:52:04.504748   70393 cri.go:89] found id: ""
	I0528 21:52:04.504777   70393 logs.go:276] 0 containers: []
	W0528 21:52:04.504788   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:52:04.504795   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:52:04.504857   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:52:04.538974   70393 cri.go:89] found id: ""
	I0528 21:52:04.539002   70393 logs.go:276] 0 containers: []
	W0528 21:52:04.539012   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:52:04.539022   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:52:04.539036   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:52:04.591542   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:52:04.591574   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:52:04.606074   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:52:04.606100   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:52:04.680046   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:52:04.680067   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:52:04.680081   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:52:04.759880   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:52:04.759909   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:52:07.300613   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:52:07.314243   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:52:07.314307   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:52:07.352343   70393 cri.go:89] found id: ""
	I0528 21:52:07.352368   70393 logs.go:276] 0 containers: []
	W0528 21:52:07.352376   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:52:07.352382   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:52:07.352439   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:52:07.390222   70393 cri.go:89] found id: ""
	I0528 21:52:07.390244   70393 logs.go:276] 0 containers: []
	W0528 21:52:07.390256   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:52:07.390262   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:52:07.390316   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:52:07.427969   70393 cri.go:89] found id: ""
	I0528 21:52:07.428001   70393 logs.go:276] 0 containers: []
	W0528 21:52:07.428012   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:52:07.428020   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:52:07.428069   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:52:07.464171   70393 cri.go:89] found id: ""
	I0528 21:52:07.464198   70393 logs.go:276] 0 containers: []
	W0528 21:52:07.464209   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:52:07.464217   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:52:07.464290   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:52:07.504560   70393 cri.go:89] found id: ""
	I0528 21:52:07.504581   70393 logs.go:276] 0 containers: []
	W0528 21:52:07.504588   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:52:07.504598   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:52:07.504642   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:52:07.544009   70393 cri.go:89] found id: ""
	I0528 21:52:07.544037   70393 logs.go:276] 0 containers: []
	W0528 21:52:07.544044   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:52:07.544050   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:52:07.544103   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:52:07.578812   70393 cri.go:89] found id: ""
	I0528 21:52:07.578833   70393 logs.go:276] 0 containers: []
	W0528 21:52:07.578840   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:52:07.578846   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:52:07.578906   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:52:07.616393   70393 cri.go:89] found id: ""
	I0528 21:52:07.616419   70393 logs.go:276] 0 containers: []
	W0528 21:52:07.616430   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:52:07.616439   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:52:07.616452   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:52:07.632357   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:52:07.632383   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:52:07.707987   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:52:07.708012   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:52:07.708031   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:52:07.785017   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:52:07.785053   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:52:07.822011   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:52:07.822040   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:52:10.373633   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:52:10.387258   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:52:10.387319   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:52:10.420517   70393 cri.go:89] found id: ""
	I0528 21:52:10.420544   70393 logs.go:276] 0 containers: []
	W0528 21:52:10.420552   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:52:10.420558   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:52:10.420610   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:52:10.453213   70393 cri.go:89] found id: ""
	I0528 21:52:10.453233   70393 logs.go:276] 0 containers: []
	W0528 21:52:10.453240   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:52:10.453246   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:52:10.453293   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:52:10.488911   70393 cri.go:89] found id: ""
	I0528 21:52:10.488938   70393 logs.go:276] 0 containers: []
	W0528 21:52:10.488945   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:52:10.488961   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:52:10.489019   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:52:10.523867   70393 cri.go:89] found id: ""
	I0528 21:52:10.523891   70393 logs.go:276] 0 containers: []
	W0528 21:52:10.523898   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:52:10.523903   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:52:10.523966   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:52:10.566434   70393 cri.go:89] found id: ""
	I0528 21:52:10.566466   70393 logs.go:276] 0 containers: []
	W0528 21:52:10.566478   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:52:10.566485   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:52:10.566550   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:52:10.603871   70393 cri.go:89] found id: ""
	I0528 21:52:10.603889   70393 logs.go:276] 0 containers: []
	W0528 21:52:10.603896   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:52:10.603902   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:52:10.603955   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:52:10.641836   70393 cri.go:89] found id: ""
	I0528 21:52:10.641869   70393 logs.go:276] 0 containers: []
	W0528 21:52:10.641881   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:52:10.641890   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:52:10.641955   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:52:10.680272   70393 cri.go:89] found id: ""
	I0528 21:52:10.680304   70393 logs.go:276] 0 containers: []
	W0528 21:52:10.680322   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:52:10.680333   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:52:10.680347   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:52:10.732628   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:52:10.732665   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:52:10.747467   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:52:10.747507   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:52:10.827009   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:52:10.827035   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:52:10.827054   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:52:10.907038   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:52:10.907071   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:52:13.457300   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:52:13.470813   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:52:13.470893   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:52:13.505175   70393 cri.go:89] found id: ""
	I0528 21:52:13.505197   70393 logs.go:276] 0 containers: []
	W0528 21:52:13.505205   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:52:13.505211   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:52:13.505278   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:52:13.541875   70393 cri.go:89] found id: ""
	I0528 21:52:13.541901   70393 logs.go:276] 0 containers: []
	W0528 21:52:13.541911   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:52:13.541918   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:52:13.541973   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:52:13.578291   70393 cri.go:89] found id: ""
	I0528 21:52:13.578317   70393 logs.go:276] 0 containers: []
	W0528 21:52:13.578328   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:52:13.578335   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:52:13.578390   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:52:13.613174   70393 cri.go:89] found id: ""
	I0528 21:52:13.613201   70393 logs.go:276] 0 containers: []
	W0528 21:52:13.613212   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:52:13.613219   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:52:13.613277   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:52:13.654376   70393 cri.go:89] found id: ""
	I0528 21:52:13.654402   70393 logs.go:276] 0 containers: []
	W0528 21:52:13.654412   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:52:13.654419   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:52:13.654485   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:52:13.699858   70393 cri.go:89] found id: ""
	I0528 21:52:13.699887   70393 logs.go:276] 0 containers: []
	W0528 21:52:13.699898   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:52:13.699909   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:52:13.699968   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:52:13.735850   70393 cri.go:89] found id: ""
	I0528 21:52:13.735872   70393 logs.go:276] 0 containers: []
	W0528 21:52:13.735880   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:52:13.735887   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:52:13.735946   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:52:13.772678   70393 cri.go:89] found id: ""
	I0528 21:52:13.772709   70393 logs.go:276] 0 containers: []
	W0528 21:52:13.772719   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:52:13.772729   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:52:13.772743   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:52:13.828471   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:52:13.828504   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:52:13.842020   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:52:13.842050   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:52:13.908875   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:52:13.908901   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:52:13.908917   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:52:13.987443   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:52:13.987486   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:52:16.528033   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:52:16.542702   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:52:16.542761   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:52:16.579537   70393 cri.go:89] found id: ""
	I0528 21:52:16.579564   70393 logs.go:276] 0 containers: []
	W0528 21:52:16.579575   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:52:16.579582   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:52:16.579642   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:52:16.613478   70393 cri.go:89] found id: ""
	I0528 21:52:16.613500   70393 logs.go:276] 0 containers: []
	W0528 21:52:16.613506   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:52:16.613512   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:52:16.613551   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:52:16.647854   70393 cri.go:89] found id: ""
	I0528 21:52:16.647883   70393 logs.go:276] 0 containers: []
	W0528 21:52:16.647900   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:52:16.647907   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:52:16.647968   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:52:16.692206   70393 cri.go:89] found id: ""
	I0528 21:52:16.692241   70393 logs.go:276] 0 containers: []
	W0528 21:52:16.692251   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:52:16.692258   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:52:16.692323   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:52:16.736910   70393 cri.go:89] found id: ""
	I0528 21:52:16.736936   70393 logs.go:276] 0 containers: []
	W0528 21:52:16.736946   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:52:16.736953   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:52:16.737015   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:52:16.788365   70393 cri.go:89] found id: ""
	I0528 21:52:16.788395   70393 logs.go:276] 0 containers: []
	W0528 21:52:16.788406   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:52:16.788415   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:52:16.788477   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:52:16.830770   70393 cri.go:89] found id: ""
	I0528 21:52:16.830795   70393 logs.go:276] 0 containers: []
	W0528 21:52:16.830807   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:52:16.830818   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:52:16.830873   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:52:16.864649   70393 cri.go:89] found id: ""
	I0528 21:52:16.864677   70393 logs.go:276] 0 containers: []
	W0528 21:52:16.864688   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:52:16.864698   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:52:16.864712   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:52:16.915210   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:52:16.915234   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:52:16.929136   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:52:16.929159   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:52:16.994502   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:52:16.994527   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:52:16.994540   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:52:17.070054   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:52:17.070086   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
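	The block above is one full retry cycle: roughly every three seconds minikube polls crictl for each expected control-plane container, finds none, then re-gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status output. A minimal, hypothetical Go sketch of that polling pattern is below; it is not minikube's actual implementation, and the command, deadline, and 3-second cadence are assumptions read off the timestamps in this log.

	// Hypothetical sketch (not minikube code): poll crictl for a
	// kube-apiserver container until a deadline, mirroring the ~3s
	// retry cadence visible in the log above. Assumes sudo and crictl
	// are available on the node.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// findContainer returns any container IDs reported by
	// `sudo crictl ps -a --quiet --name=<name>` (the same query the log runs).
	func findContainer(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		deadline := time.Now().Add(2 * time.Minute) // assumed overall timeout
		for time.Now().Before(deadline) {
			ids, err := findContainer("kube-apiserver")
			if err == nil && len(ids) > 0 {
				fmt.Println("found kube-apiserver container(s):", ids)
				return
			}
			time.Sleep(3 * time.Second) // matches the spacing between cycles above
		}
		fmt.Println("no kube-apiserver container appeared before the deadline")
	}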
	I0528 21:52:19.613043   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:52:19.626147   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:52:19.626213   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:52:19.665733   70393 cri.go:89] found id: ""
	I0528 21:52:19.665755   70393 logs.go:276] 0 containers: []
	W0528 21:52:19.665772   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:52:19.665780   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:52:19.665846   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:52:19.702826   70393 cri.go:89] found id: ""
	I0528 21:52:19.702852   70393 logs.go:276] 0 containers: []
	W0528 21:52:19.702863   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:52:19.702870   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:52:19.702933   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:52:19.740962   70393 cri.go:89] found id: ""
	I0528 21:52:19.740990   70393 logs.go:276] 0 containers: []
	W0528 21:52:19.740998   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:52:19.741003   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:52:19.741049   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:52:19.775732   70393 cri.go:89] found id: ""
	I0528 21:52:19.775754   70393 logs.go:276] 0 containers: []
	W0528 21:52:19.775761   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:52:19.775766   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:52:19.775812   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:52:19.809700   70393 cri.go:89] found id: ""
	I0528 21:52:19.809728   70393 logs.go:276] 0 containers: []
	W0528 21:52:19.809739   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:52:19.809746   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:52:19.809818   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:52:19.842644   70393 cri.go:89] found id: ""
	I0528 21:52:19.842683   70393 logs.go:276] 0 containers: []
	W0528 21:52:19.842691   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:52:19.842698   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:52:19.842746   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:52:19.876136   70393 cri.go:89] found id: ""
	I0528 21:52:19.876168   70393 logs.go:276] 0 containers: []
	W0528 21:52:19.876179   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:52:19.876187   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:52:19.876247   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:52:19.907992   70393 cri.go:89] found id: ""
	I0528 21:52:19.908022   70393 logs.go:276] 0 containers: []
	W0528 21:52:19.908032   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:52:19.908049   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:52:19.908066   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:52:19.958285   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:52:19.958315   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:52:19.971881   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:52:19.971906   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:52:20.037575   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:52:20.037595   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:52:20.037613   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:52:20.124397   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:52:20.124430   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:52:22.670780   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:52:22.683208   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:52:22.683265   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:52:22.737329   70393 cri.go:89] found id: ""
	I0528 21:52:22.737359   70393 logs.go:276] 0 containers: []
	W0528 21:52:22.737371   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:52:22.737379   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:52:22.737447   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:52:22.791896   70393 cri.go:89] found id: ""
	I0528 21:52:22.791921   70393 logs.go:276] 0 containers: []
	W0528 21:52:22.791935   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:52:22.791943   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:52:22.792009   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:52:22.828104   70393 cri.go:89] found id: ""
	I0528 21:52:22.828133   70393 logs.go:276] 0 containers: []
	W0528 21:52:22.828143   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:52:22.828150   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:52:22.828209   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:52:22.859083   70393 cri.go:89] found id: ""
	I0528 21:52:22.859114   70393 logs.go:276] 0 containers: []
	W0528 21:52:22.859125   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:52:22.859135   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:52:22.859198   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:52:22.892470   70393 cri.go:89] found id: ""
	I0528 21:52:22.892494   70393 logs.go:276] 0 containers: []
	W0528 21:52:22.892502   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:52:22.892507   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:52:22.892554   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:52:22.928541   70393 cri.go:89] found id: ""
	I0528 21:52:22.928572   70393 logs.go:276] 0 containers: []
	W0528 21:52:22.928583   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:52:22.928591   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:52:22.928649   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:52:22.973268   70393 cri.go:89] found id: ""
	I0528 21:52:22.973295   70393 logs.go:276] 0 containers: []
	W0528 21:52:22.973305   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:52:22.973312   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:52:22.973368   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:52:23.011828   70393 cri.go:89] found id: ""
	I0528 21:52:23.011849   70393 logs.go:276] 0 containers: []
	W0528 21:52:23.011857   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:52:23.011865   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:52:23.011876   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:52:23.025343   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:52:23.025384   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:52:23.098723   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:52:23.098741   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:52:23.098753   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:52:23.178096   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:52:23.178130   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:52:23.218916   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:52:23.218945   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:52:25.769875   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:52:25.782381   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:52:25.782439   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:52:25.819401   70393 cri.go:89] found id: ""
	I0528 21:52:25.819421   70393 logs.go:276] 0 containers: []
	W0528 21:52:25.819428   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:52:25.819438   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:52:25.819505   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:52:25.860933   70393 cri.go:89] found id: ""
	I0528 21:52:25.860954   70393 logs.go:276] 0 containers: []
	W0528 21:52:25.860961   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:52:25.860966   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:52:25.861023   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:52:25.899354   70393 cri.go:89] found id: ""
	I0528 21:52:25.899373   70393 logs.go:276] 0 containers: []
	W0528 21:52:25.899381   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:52:25.899386   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:52:25.899432   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:52:25.934135   70393 cri.go:89] found id: ""
	I0528 21:52:25.934158   70393 logs.go:276] 0 containers: []
	W0528 21:52:25.934170   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:52:25.934176   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:52:25.934227   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:52:25.971285   70393 cri.go:89] found id: ""
	I0528 21:52:25.971309   70393 logs.go:276] 0 containers: []
	W0528 21:52:25.971317   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:52:25.971322   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:52:25.971371   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:52:26.005501   70393 cri.go:89] found id: ""
	I0528 21:52:26.005526   70393 logs.go:276] 0 containers: []
	W0528 21:52:26.005534   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:52:26.005540   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:52:26.005605   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:52:26.040685   70393 cri.go:89] found id: ""
	I0528 21:52:26.040708   70393 logs.go:276] 0 containers: []
	W0528 21:52:26.040716   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:52:26.040725   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:52:26.040780   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:52:26.080513   70393 cri.go:89] found id: ""
	I0528 21:52:26.080535   70393 logs.go:276] 0 containers: []
	W0528 21:52:26.080542   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:52:26.080552   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:52:26.080565   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:52:26.119886   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:52:26.119918   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:52:26.171610   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:52:26.171647   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:52:26.185846   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:52:26.185871   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:52:26.251207   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:52:26.251232   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:52:26.251253   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:52:28.828312   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:52:28.841672   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:52:28.841730   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:52:28.876132   70393 cri.go:89] found id: ""
	I0528 21:52:28.876154   70393 logs.go:276] 0 containers: []
	W0528 21:52:28.876161   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:52:28.876168   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:52:28.876230   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:52:28.912876   70393 cri.go:89] found id: ""
	I0528 21:52:28.912903   70393 logs.go:276] 0 containers: []
	W0528 21:52:28.912910   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:52:28.912916   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:52:28.912967   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:52:28.949143   70393 cri.go:89] found id: ""
	I0528 21:52:28.949172   70393 logs.go:276] 0 containers: []
	W0528 21:52:28.949183   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:52:28.949190   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:52:28.949253   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:52:28.984487   70393 cri.go:89] found id: ""
	I0528 21:52:28.984516   70393 logs.go:276] 0 containers: []
	W0528 21:52:28.984527   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:52:28.984535   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:52:28.984600   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:52:29.019262   70393 cri.go:89] found id: ""
	I0528 21:52:29.019288   70393 logs.go:276] 0 containers: []
	W0528 21:52:29.019297   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:52:29.019303   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:52:29.019354   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:52:29.054863   70393 cri.go:89] found id: ""
	I0528 21:52:29.054889   70393 logs.go:276] 0 containers: []
	W0528 21:52:29.054896   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:52:29.054902   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:52:29.054948   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:52:29.095627   70393 cri.go:89] found id: ""
	I0528 21:52:29.095653   70393 logs.go:276] 0 containers: []
	W0528 21:52:29.095659   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:52:29.095665   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:52:29.095710   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:52:29.130848   70393 cri.go:89] found id: ""
	I0528 21:52:29.130875   70393 logs.go:276] 0 containers: []
	W0528 21:52:29.130882   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:52:29.130891   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:52:29.130904   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:52:29.146261   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:52:29.146286   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:52:29.220922   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:52:29.220945   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:52:29.220960   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:52:29.299404   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:52:29.299438   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:52:29.339118   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:52:29.339143   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
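	The "container status" step a few lines above shells out to a compound command that prefers crictl and falls back to docker. A small, assumed Go wrapper showing that same fallback is sketched below; the shell string is copied from the log, while the wrapper program itself is illustrative only.

	// Hypothetical sketch: run the crictl-or-docker fallback exactly as the
	// "container status" step does, via bash -c.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("/bin/bash", "-c",
			"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
		out, err := cmd.CombinedOutput()
		if err != nil {
			fmt.Println("container status command failed:", err)
		}
		fmt.Print(string(out))
	}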
	I0528 21:52:31.894505   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:52:31.907193   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:52:31.907261   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:52:31.941831   70393 cri.go:89] found id: ""
	I0528 21:52:31.941858   70393 logs.go:276] 0 containers: []
	W0528 21:52:31.941866   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:52:31.941871   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:52:31.941920   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:52:31.977307   70393 cri.go:89] found id: ""
	I0528 21:52:31.977333   70393 logs.go:276] 0 containers: []
	W0528 21:52:31.977344   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:52:31.977351   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:52:31.977412   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:52:32.014510   70393 cri.go:89] found id: ""
	I0528 21:52:32.014543   70393 logs.go:276] 0 containers: []
	W0528 21:52:32.014552   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:52:32.014562   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:52:32.014620   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:52:32.049080   70393 cri.go:89] found id: ""
	I0528 21:52:32.049105   70393 logs.go:276] 0 containers: []
	W0528 21:52:32.049113   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:52:32.049119   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:52:32.049186   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:52:32.083496   70393 cri.go:89] found id: ""
	I0528 21:52:32.083524   70393 logs.go:276] 0 containers: []
	W0528 21:52:32.083534   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:52:32.083540   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:52:32.083594   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:52:32.116540   70393 cri.go:89] found id: ""
	I0528 21:52:32.116563   70393 logs.go:276] 0 containers: []
	W0528 21:52:32.116570   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:52:32.116576   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:52:32.116625   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:52:32.153925   70393 cri.go:89] found id: ""
	I0528 21:52:32.153954   70393 logs.go:276] 0 containers: []
	W0528 21:52:32.153964   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:52:32.153970   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:52:32.154033   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:52:32.188089   70393 cri.go:89] found id: ""
	I0528 21:52:32.188117   70393 logs.go:276] 0 containers: []
	W0528 21:52:32.188128   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:52:32.188138   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:52:32.188153   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:52:32.237663   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:52:32.237692   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:52:32.253554   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:52:32.253581   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:52:32.323224   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:52:32.323253   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:52:32.323265   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:52:32.407033   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:52:32.407074   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:52:34.945433   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:52:34.960269   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:52:34.960344   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:52:34.996508   70393 cri.go:89] found id: ""
	I0528 21:52:34.996536   70393 logs.go:276] 0 containers: []
	W0528 21:52:34.996544   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:52:34.996552   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:52:34.996611   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:52:35.032983   70393 cri.go:89] found id: ""
	I0528 21:52:35.033007   70393 logs.go:276] 0 containers: []
	W0528 21:52:35.033015   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:52:35.033020   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:52:35.033071   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:52:35.066860   70393 cri.go:89] found id: ""
	I0528 21:52:35.066882   70393 logs.go:276] 0 containers: []
	W0528 21:52:35.066892   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:52:35.066899   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:52:35.066960   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:52:35.102801   70393 cri.go:89] found id: ""
	I0528 21:52:35.102833   70393 logs.go:276] 0 containers: []
	W0528 21:52:35.102843   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:52:35.102851   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:52:35.102907   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:52:35.135678   70393 cri.go:89] found id: ""
	I0528 21:52:35.135716   70393 logs.go:276] 0 containers: []
	W0528 21:52:35.135727   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:52:35.135734   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:52:35.135801   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:52:35.174322   70393 cri.go:89] found id: ""
	I0528 21:52:35.174345   70393 logs.go:276] 0 containers: []
	W0528 21:52:35.174352   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:52:35.174357   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:52:35.174414   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:52:35.215117   70393 cri.go:89] found id: ""
	I0528 21:52:35.215140   70393 logs.go:276] 0 containers: []
	W0528 21:52:35.215148   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:52:35.215154   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:52:35.215213   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:52:35.253639   70393 cri.go:89] found id: ""
	I0528 21:52:35.253658   70393 logs.go:276] 0 containers: []
	W0528 21:52:35.253668   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:52:35.253678   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:52:35.253692   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:52:35.306020   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:52:35.306051   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:52:35.321371   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:52:35.321393   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:52:35.397140   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:52:35.397161   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:52:35.397175   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:52:35.475009   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:52:35.475042   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:52:38.021509   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:52:38.034449   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:52:38.034523   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:52:38.069292   70393 cri.go:89] found id: ""
	I0528 21:52:38.069319   70393 logs.go:276] 0 containers: []
	W0528 21:52:38.069328   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:52:38.069335   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:52:38.069396   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:52:38.100677   70393 cri.go:89] found id: ""
	I0528 21:52:38.100700   70393 logs.go:276] 0 containers: []
	W0528 21:52:38.100709   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:52:38.100715   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:52:38.100774   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:52:38.140292   70393 cri.go:89] found id: ""
	I0528 21:52:38.140320   70393 logs.go:276] 0 containers: []
	W0528 21:52:38.140331   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:52:38.140338   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:52:38.140394   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:52:38.176886   70393 cri.go:89] found id: ""
	I0528 21:52:38.176918   70393 logs.go:276] 0 containers: []
	W0528 21:52:38.176930   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:52:38.176938   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:52:38.177009   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:52:38.213733   70393 cri.go:89] found id: ""
	I0528 21:52:38.213772   70393 logs.go:276] 0 containers: []
	W0528 21:52:38.213784   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:52:38.213791   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:52:38.213850   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:52:38.250263   70393 cri.go:89] found id: ""
	I0528 21:52:38.250283   70393 logs.go:276] 0 containers: []
	W0528 21:52:38.250289   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:52:38.250295   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:52:38.250348   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:52:38.286347   70393 cri.go:89] found id: ""
	I0528 21:52:38.286368   70393 logs.go:276] 0 containers: []
	W0528 21:52:38.286375   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:52:38.286380   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:52:38.286436   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:52:38.320198   70393 cri.go:89] found id: ""
	I0528 21:52:38.320241   70393 logs.go:276] 0 containers: []
	W0528 21:52:38.320254   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:52:38.320267   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:52:38.320282   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:52:38.374391   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:52:38.374421   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:52:38.387930   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:52:38.387955   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:52:38.459203   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:52:38.459221   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:52:38.459233   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:52:38.534789   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:52:38.534816   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:52:41.073296   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:52:41.087377   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:52:41.087453   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:52:41.122849   70393 cri.go:89] found id: ""
	I0528 21:52:41.122879   70393 logs.go:276] 0 containers: []
	W0528 21:52:41.122889   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:52:41.122897   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:52:41.122964   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:52:41.158864   70393 cri.go:89] found id: ""
	I0528 21:52:41.158895   70393 logs.go:276] 0 containers: []
	W0528 21:52:41.158907   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:52:41.158915   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:52:41.158985   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:52:41.194401   70393 cri.go:89] found id: ""
	I0528 21:52:41.194427   70393 logs.go:276] 0 containers: []
	W0528 21:52:41.194436   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:52:41.194444   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:52:41.194490   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:52:41.229975   70393 cri.go:89] found id: ""
	I0528 21:52:41.230005   70393 logs.go:276] 0 containers: []
	W0528 21:52:41.230016   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:52:41.230026   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:52:41.230086   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:52:41.270461   70393 cri.go:89] found id: ""
	I0528 21:52:41.270490   70393 logs.go:276] 0 containers: []
	W0528 21:52:41.270501   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:52:41.270508   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:52:41.270570   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:52:41.305115   70393 cri.go:89] found id: ""
	I0528 21:52:41.305148   70393 logs.go:276] 0 containers: []
	W0528 21:52:41.305159   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:52:41.305167   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:52:41.305230   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:52:41.341144   70393 cri.go:89] found id: ""
	I0528 21:52:41.341172   70393 logs.go:276] 0 containers: []
	W0528 21:52:41.341180   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:52:41.341186   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:52:41.341246   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:52:41.375591   70393 cri.go:89] found id: ""
	I0528 21:52:41.375616   70393 logs.go:276] 0 containers: []
	W0528 21:52:41.375626   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:52:41.375636   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:52:41.375651   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:52:41.427914   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:52:41.427951   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:52:41.441263   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:52:41.441289   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:52:41.512335   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:52:41.512356   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:52:41.512374   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:52:41.594670   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:52:41.594713   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:52:44.137847   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:52:44.151284   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:52:44.151347   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:52:44.193124   70393 cri.go:89] found id: ""
	I0528 21:52:44.193149   70393 logs.go:276] 0 containers: []
	W0528 21:52:44.193157   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:52:44.193162   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:52:44.193208   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:52:44.228728   70393 cri.go:89] found id: ""
	I0528 21:52:44.228753   70393 logs.go:276] 0 containers: []
	W0528 21:52:44.228761   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:52:44.228767   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:52:44.228813   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:52:44.264568   70393 cri.go:89] found id: ""
	I0528 21:52:44.264591   70393 logs.go:276] 0 containers: []
	W0528 21:52:44.264599   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:52:44.264604   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:52:44.264649   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:52:44.299017   70393 cri.go:89] found id: ""
	I0528 21:52:44.299044   70393 logs.go:276] 0 containers: []
	W0528 21:52:44.299054   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:52:44.299061   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:52:44.299123   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:52:44.331809   70393 cri.go:89] found id: ""
	I0528 21:52:44.331838   70393 logs.go:276] 0 containers: []
	W0528 21:52:44.331848   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:52:44.331854   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:52:44.331921   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:52:44.375147   70393 cri.go:89] found id: ""
	I0528 21:52:44.375175   70393 logs.go:276] 0 containers: []
	W0528 21:52:44.375185   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:52:44.375193   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:52:44.375254   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:52:44.411860   70393 cri.go:89] found id: ""
	I0528 21:52:44.411889   70393 logs.go:276] 0 containers: []
	W0528 21:52:44.411900   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:52:44.411908   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:52:44.411973   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:52:44.445405   70393 cri.go:89] found id: ""
	I0528 21:52:44.445427   70393 logs.go:276] 0 containers: []
	W0528 21:52:44.445434   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:52:44.445442   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:52:44.445453   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:52:44.495782   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:52:44.495816   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:52:44.508556   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:52:44.508579   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:52:44.579462   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:52:44.579488   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:52:44.579503   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:52:44.658185   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:52:44.658210   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:52:47.198620   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:52:47.211546   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:52:47.211604   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:52:47.247622   70393 cri.go:89] found id: ""
	I0528 21:52:47.247647   70393 logs.go:276] 0 containers: []
	W0528 21:52:47.247656   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:52:47.247667   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:52:47.247726   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:52:47.284789   70393 cri.go:89] found id: ""
	I0528 21:52:47.284821   70393 logs.go:276] 0 containers: []
	W0528 21:52:47.284831   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:52:47.284839   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:52:47.284899   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:52:47.320623   70393 cri.go:89] found id: ""
	I0528 21:52:47.320652   70393 logs.go:276] 0 containers: []
	W0528 21:52:47.320662   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:52:47.320669   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:52:47.320730   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:52:47.354451   70393 cri.go:89] found id: ""
	I0528 21:52:47.354478   70393 logs.go:276] 0 containers: []
	W0528 21:52:47.354488   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:52:47.354495   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:52:47.354551   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:52:47.392746   70393 cri.go:89] found id: ""
	I0528 21:52:47.392769   70393 logs.go:276] 0 containers: []
	W0528 21:52:47.392779   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:52:47.392786   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:52:47.392839   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:52:47.426534   70393 cri.go:89] found id: ""
	I0528 21:52:47.426557   70393 logs.go:276] 0 containers: []
	W0528 21:52:47.426566   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:52:47.426574   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:52:47.426633   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:52:47.463611   70393 cri.go:89] found id: ""
	I0528 21:52:47.463636   70393 logs.go:276] 0 containers: []
	W0528 21:52:47.463644   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:52:47.463649   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:52:47.463708   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:52:47.496480   70393 cri.go:89] found id: ""
	I0528 21:52:47.496502   70393 logs.go:276] 0 containers: []
	W0528 21:52:47.496510   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:52:47.496518   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:52:47.496529   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:52:47.509189   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:52:47.509212   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:52:47.575913   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:52:47.575933   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:52:47.575949   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:52:47.654773   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:52:47.654805   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:52:47.692857   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:52:47.692896   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:52:50.250030   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:52:50.263592   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:52:50.263658   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:52:50.304936   70393 cri.go:89] found id: ""
	I0528 21:52:50.304964   70393 logs.go:276] 0 containers: []
	W0528 21:52:50.304972   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:52:50.304978   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:52:50.305041   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:52:50.345195   70393 cri.go:89] found id: ""
	I0528 21:52:50.345227   70393 logs.go:276] 0 containers: []
	W0528 21:52:50.345235   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:52:50.345240   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:52:50.345288   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:52:50.377750   70393 cri.go:89] found id: ""
	I0528 21:52:50.377791   70393 logs.go:276] 0 containers: []
	W0528 21:52:50.377802   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:52:50.377809   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:52:50.377869   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:52:50.414227   70393 cri.go:89] found id: ""
	I0528 21:52:50.414254   70393 logs.go:276] 0 containers: []
	W0528 21:52:50.414265   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:52:50.414274   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:52:50.414333   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:52:50.449699   70393 cri.go:89] found id: ""
	I0528 21:52:50.449723   70393 logs.go:276] 0 containers: []
	W0528 21:52:50.449730   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:52:50.449736   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:52:50.449806   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:52:50.482916   70393 cri.go:89] found id: ""
	I0528 21:52:50.482942   70393 logs.go:276] 0 containers: []
	W0528 21:52:50.482949   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:52:50.482955   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:52:50.483003   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:52:50.516180   70393 cri.go:89] found id: ""
	I0528 21:52:50.516200   70393 logs.go:276] 0 containers: []
	W0528 21:52:50.516207   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:52:50.516213   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:52:50.516266   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:52:50.550988   70393 cri.go:89] found id: ""
	I0528 21:52:50.551015   70393 logs.go:276] 0 containers: []
	W0528 21:52:50.551027   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:52:50.551036   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:52:50.551050   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:52:50.634910   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:52:50.634944   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:52:50.673165   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:52:50.673194   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:52:50.724459   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:52:50.724492   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:52:50.738031   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:52:50.738058   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:52:50.812231   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:52:53.312795   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:52:53.327089   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:52:53.327152   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:52:53.362855   70393 cri.go:89] found id: ""
	I0528 21:52:53.362881   70393 logs.go:276] 0 containers: []
	W0528 21:52:53.362892   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:52:53.362900   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:52:53.362958   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:52:53.398503   70393 cri.go:89] found id: ""
	I0528 21:52:53.398532   70393 logs.go:276] 0 containers: []
	W0528 21:52:53.398543   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:52:53.398550   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:52:53.398614   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:52:53.432528   70393 cri.go:89] found id: ""
	I0528 21:52:53.432557   70393 logs.go:276] 0 containers: []
	W0528 21:52:53.432569   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:52:53.432580   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:52:53.432635   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:52:53.468674   70393 cri.go:89] found id: ""
	I0528 21:52:53.468702   70393 logs.go:276] 0 containers: []
	W0528 21:52:53.468712   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:52:53.468721   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:52:53.468779   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:52:53.502864   70393 cri.go:89] found id: ""
	I0528 21:52:53.502887   70393 logs.go:276] 0 containers: []
	W0528 21:52:53.502896   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:52:53.502901   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:52:53.502957   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:52:53.539655   70393 cri.go:89] found id: ""
	I0528 21:52:53.539675   70393 logs.go:276] 0 containers: []
	W0528 21:52:53.539682   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:52:53.539687   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:52:53.539740   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:52:53.575942   70393 cri.go:89] found id: ""
	I0528 21:52:53.575968   70393 logs.go:276] 0 containers: []
	W0528 21:52:53.575988   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:52:53.575996   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:52:53.576049   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:52:53.613045   70393 cri.go:89] found id: ""
	I0528 21:52:53.613066   70393 logs.go:276] 0 containers: []
	W0528 21:52:53.613073   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:52:53.613081   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:52:53.613093   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:52:53.667001   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:52:53.667031   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:52:53.680999   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:52:53.681027   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:52:53.751810   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:52:53.751831   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:52:53.751842   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:52:53.836417   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:52:53.836448   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:52:56.376203   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:52:56.391891   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:52:56.391950   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:52:56.424337   70393 cri.go:89] found id: ""
	I0528 21:52:56.424361   70393 logs.go:276] 0 containers: []
	W0528 21:52:56.424369   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:52:56.424374   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:52:56.424431   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:52:56.458016   70393 cri.go:89] found id: ""
	I0528 21:52:56.458041   70393 logs.go:276] 0 containers: []
	W0528 21:52:56.458049   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:52:56.458054   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:52:56.458117   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:52:56.493258   70393 cri.go:89] found id: ""
	I0528 21:52:56.493283   70393 logs.go:276] 0 containers: []
	W0528 21:52:56.493291   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:52:56.493297   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:52:56.493355   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:52:56.526108   70393 cri.go:89] found id: ""
	I0528 21:52:56.526129   70393 logs.go:276] 0 containers: []
	W0528 21:52:56.526137   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:52:56.526142   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:52:56.526199   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:52:56.558406   70393 cri.go:89] found id: ""
	I0528 21:52:56.558434   70393 logs.go:276] 0 containers: []
	W0528 21:52:56.558445   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:52:56.558453   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:52:56.558506   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:52:56.591712   70393 cri.go:89] found id: ""
	I0528 21:52:56.591740   70393 logs.go:276] 0 containers: []
	W0528 21:52:56.591748   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:52:56.591754   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:52:56.591812   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:52:56.626329   70393 cri.go:89] found id: ""
	I0528 21:52:56.626356   70393 logs.go:276] 0 containers: []
	W0528 21:52:56.626368   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:52:56.626375   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:52:56.626440   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:52:56.660439   70393 cri.go:89] found id: ""
	I0528 21:52:56.660463   70393 logs.go:276] 0 containers: []
	W0528 21:52:56.660473   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:52:56.660483   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:52:56.660496   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:52:56.712799   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:52:56.712824   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:52:56.726056   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:52:56.726077   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:52:56.812142   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:52:56.812167   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:52:56.812182   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:52:56.900233   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:52:56.900269   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:52:59.438200   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:52:59.452205   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:52:59.452277   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:52:59.487788   70393 cri.go:89] found id: ""
	I0528 21:52:59.487824   70393 logs.go:276] 0 containers: []
	W0528 21:52:59.487834   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:52:59.487842   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:52:59.487911   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:52:59.522617   70393 cri.go:89] found id: ""
	I0528 21:52:59.522640   70393 logs.go:276] 0 containers: []
	W0528 21:52:59.522651   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:52:59.522664   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:52:59.522718   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:52:59.556981   70393 cri.go:89] found id: ""
	I0528 21:52:59.557009   70393 logs.go:276] 0 containers: []
	W0528 21:52:59.557019   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:52:59.557026   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:52:59.557075   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:52:59.589980   70393 cri.go:89] found id: ""
	I0528 21:52:59.590022   70393 logs.go:276] 0 containers: []
	W0528 21:52:59.590034   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:52:59.590041   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:52:59.590122   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:52:59.623071   70393 cri.go:89] found id: ""
	I0528 21:52:59.623100   70393 logs.go:276] 0 containers: []
	W0528 21:52:59.623111   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:52:59.623118   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:52:59.623173   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:52:59.656835   70393 cri.go:89] found id: ""
	I0528 21:52:59.656859   70393 logs.go:276] 0 containers: []
	W0528 21:52:59.656866   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:52:59.656871   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:52:59.656921   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:52:59.690097   70393 cri.go:89] found id: ""
	I0528 21:52:59.690122   70393 logs.go:276] 0 containers: []
	W0528 21:52:59.690131   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:52:59.690145   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:52:59.690187   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:52:59.722132   70393 cri.go:89] found id: ""
	I0528 21:52:59.722158   70393 logs.go:276] 0 containers: []
	W0528 21:52:59.722169   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:52:59.722181   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:52:59.722208   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:52:59.803523   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:52:59.803548   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:52:59.803561   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:52:59.878521   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:52:59.878553   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:52:59.918678   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:52:59.918703   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:52:59.968544   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:52:59.968576   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:53:02.482574   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:53:02.496455   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:53:02.496531   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:53:02.553708   70393 cri.go:89] found id: ""
	I0528 21:53:02.553735   70393 logs.go:276] 0 containers: []
	W0528 21:53:02.553745   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:53:02.553753   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:53:02.553828   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:53:02.586895   70393 cri.go:89] found id: ""
	I0528 21:53:02.586918   70393 logs.go:276] 0 containers: []
	W0528 21:53:02.586925   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:53:02.586930   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:53:02.586984   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:53:02.620580   70393 cri.go:89] found id: ""
	I0528 21:53:02.620604   70393 logs.go:276] 0 containers: []
	W0528 21:53:02.620610   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:53:02.620616   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:53:02.620675   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:53:02.652369   70393 cri.go:89] found id: ""
	I0528 21:53:02.652399   70393 logs.go:276] 0 containers: []
	W0528 21:53:02.652410   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:53:02.652418   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:53:02.652478   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:53:02.695081   70393 cri.go:89] found id: ""
	I0528 21:53:02.695108   70393 logs.go:276] 0 containers: []
	W0528 21:53:02.695118   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:53:02.695125   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:53:02.695183   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:53:02.728712   70393 cri.go:89] found id: ""
	I0528 21:53:02.728745   70393 logs.go:276] 0 containers: []
	W0528 21:53:02.728754   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:53:02.728760   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:53:02.728822   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:53:02.762782   70393 cri.go:89] found id: ""
	I0528 21:53:02.762808   70393 logs.go:276] 0 containers: []
	W0528 21:53:02.762818   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:53:02.762825   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:53:02.762883   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:53:02.799240   70393 cri.go:89] found id: ""
	I0528 21:53:02.799273   70393 logs.go:276] 0 containers: []
	W0528 21:53:02.799284   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:53:02.799296   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:53:02.799309   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:53:02.849796   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:53:02.849828   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:53:02.863076   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:53:02.863102   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:53:02.930715   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:53:02.930738   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:53:02.930754   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:53:03.009924   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:53:03.009960   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:53:05.549987   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:53:05.563115   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:53:05.563180   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:53:05.604618   70393 cri.go:89] found id: ""
	I0528 21:53:05.604642   70393 logs.go:276] 0 containers: []
	W0528 21:53:05.604652   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:53:05.604659   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:53:05.604723   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:53:05.639129   70393 cri.go:89] found id: ""
	I0528 21:53:05.639152   70393 logs.go:276] 0 containers: []
	W0528 21:53:05.639159   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:53:05.639164   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:53:05.639217   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:53:05.672928   70393 cri.go:89] found id: ""
	I0528 21:53:05.672958   70393 logs.go:276] 0 containers: []
	W0528 21:53:05.672969   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:53:05.672976   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:53:05.673037   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:53:05.708839   70393 cri.go:89] found id: ""
	I0528 21:53:05.708867   70393 logs.go:276] 0 containers: []
	W0528 21:53:05.708877   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:53:05.708884   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:53:05.708946   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:53:05.744626   70393 cri.go:89] found id: ""
	I0528 21:53:05.744662   70393 logs.go:276] 0 containers: []
	W0528 21:53:05.744671   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:53:05.744679   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:53:05.744737   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:53:05.786097   70393 cri.go:89] found id: ""
	I0528 21:53:05.786124   70393 logs.go:276] 0 containers: []
	W0528 21:53:05.786131   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:53:05.786137   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:53:05.786190   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:53:05.820866   70393 cri.go:89] found id: ""
	I0528 21:53:05.820891   70393 logs.go:276] 0 containers: []
	W0528 21:53:05.820898   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:53:05.820904   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:53:05.820955   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:53:05.854473   70393 cri.go:89] found id: ""
	I0528 21:53:05.854500   70393 logs.go:276] 0 containers: []
	W0528 21:53:05.854509   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:53:05.854523   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:53:05.854539   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:53:05.867488   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:53:05.867511   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:53:05.938417   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:53:05.938445   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:53:05.938460   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:53:06.017325   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:53:06.017357   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:53:06.056747   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:53:06.056775   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:53:08.609539   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:53:08.624030   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:53:08.624087   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:53:08.659600   70393 cri.go:89] found id: ""
	I0528 21:53:08.659632   70393 logs.go:276] 0 containers: []
	W0528 21:53:08.659646   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:53:08.659655   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:53:08.659717   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:53:08.693138   70393 cri.go:89] found id: ""
	I0528 21:53:08.693164   70393 logs.go:276] 0 containers: []
	W0528 21:53:08.693174   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:53:08.693182   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:53:08.693242   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:53:08.731010   70393 cri.go:89] found id: ""
	I0528 21:53:08.731030   70393 logs.go:276] 0 containers: []
	W0528 21:53:08.731037   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:53:08.731048   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:53:08.731100   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:53:08.775200   70393 cri.go:89] found id: ""
	I0528 21:53:08.775225   70393 logs.go:276] 0 containers: []
	W0528 21:53:08.775235   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:53:08.775242   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:53:08.775303   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:53:08.812308   70393 cri.go:89] found id: ""
	I0528 21:53:08.812337   70393 logs.go:276] 0 containers: []
	W0528 21:53:08.812348   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:53:08.812355   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:53:08.812412   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:53:08.846613   70393 cri.go:89] found id: ""
	I0528 21:53:08.846642   70393 logs.go:276] 0 containers: []
	W0528 21:53:08.846653   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:53:08.846660   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:53:08.846733   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:53:08.879990   70393 cri.go:89] found id: ""
	I0528 21:53:08.880015   70393 logs.go:276] 0 containers: []
	W0528 21:53:08.880022   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:53:08.880029   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:53:08.880084   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:53:08.921279   70393 cri.go:89] found id: ""
	I0528 21:53:08.921306   70393 logs.go:276] 0 containers: []
	W0528 21:53:08.921316   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:53:08.921323   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:53:08.921335   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:53:09.025280   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:53:09.025300   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:53:09.025312   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:53:09.106176   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:53:09.106214   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:53:09.149631   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:53:09.149655   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:53:09.203978   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:53:09.204008   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:53:11.718829   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:53:11.731758   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:53:11.731819   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:53:11.778803   70393 cri.go:89] found id: ""
	I0528 21:53:11.778831   70393 logs.go:276] 0 containers: []
	W0528 21:53:11.778842   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:53:11.778848   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:53:11.778929   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:53:11.815069   70393 cri.go:89] found id: ""
	I0528 21:53:11.815097   70393 logs.go:276] 0 containers: []
	W0528 21:53:11.815107   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:53:11.815112   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:53:11.815160   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:53:11.850828   70393 cri.go:89] found id: ""
	I0528 21:53:11.850856   70393 logs.go:276] 0 containers: []
	W0528 21:53:11.850865   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:53:11.850873   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:53:11.850925   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:53:11.884850   70393 cri.go:89] found id: ""
	I0528 21:53:11.884877   70393 logs.go:276] 0 containers: []
	W0528 21:53:11.884886   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:53:11.884893   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:53:11.884951   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:53:11.920043   70393 cri.go:89] found id: ""
	I0528 21:53:11.920067   70393 logs.go:276] 0 containers: []
	W0528 21:53:11.920075   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:53:11.920081   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:53:11.920134   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:53:11.956131   70393 cri.go:89] found id: ""
	I0528 21:53:11.956156   70393 logs.go:276] 0 containers: []
	W0528 21:53:11.956163   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:53:11.956169   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:53:11.956235   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:53:11.988525   70393 cri.go:89] found id: ""
	I0528 21:53:11.988553   70393 logs.go:276] 0 containers: []
	W0528 21:53:11.988575   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:53:11.988582   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:53:11.988635   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:53:12.028538   70393 cri.go:89] found id: ""
	I0528 21:53:12.028563   70393 logs.go:276] 0 containers: []
	W0528 21:53:12.028572   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:53:12.028580   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:53:12.028595   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:53:12.070768   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:53:12.070801   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:53:12.122933   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:53:12.122960   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:53:12.136290   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:53:12.136315   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:53:12.204850   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:53:12.204873   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:53:12.204889   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:53:14.787518   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:53:14.802132   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:53:14.802197   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:53:14.837087   70393 cri.go:89] found id: ""
	I0528 21:53:14.837113   70393 logs.go:276] 0 containers: []
	W0528 21:53:14.837125   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:53:14.837135   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:53:14.837195   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:53:14.874063   70393 cri.go:89] found id: ""
	I0528 21:53:14.874090   70393 logs.go:276] 0 containers: []
	W0528 21:53:14.874102   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:53:14.874109   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:53:14.874172   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:53:14.907903   70393 cri.go:89] found id: ""
	I0528 21:53:14.907932   70393 logs.go:276] 0 containers: []
	W0528 21:53:14.907944   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:53:14.907952   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:53:14.908010   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:53:14.941279   70393 cri.go:89] found id: ""
	I0528 21:53:14.941311   70393 logs.go:276] 0 containers: []
	W0528 21:53:14.941322   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:53:14.941330   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:53:14.941391   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:53:14.975517   70393 cri.go:89] found id: ""
	I0528 21:53:14.975544   70393 logs.go:276] 0 containers: []
	W0528 21:53:14.975551   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:53:14.975557   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:53:14.975619   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:53:15.012922   70393 cri.go:89] found id: ""
	I0528 21:53:15.012952   70393 logs.go:276] 0 containers: []
	W0528 21:53:15.012963   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:53:15.012971   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:53:15.013038   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:53:15.046113   70393 cri.go:89] found id: ""
	I0528 21:53:15.046140   70393 logs.go:276] 0 containers: []
	W0528 21:53:15.046151   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:53:15.046158   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:53:15.046213   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:53:15.080470   70393 cri.go:89] found id: ""
	I0528 21:53:15.080492   70393 logs.go:276] 0 containers: []
	W0528 21:53:15.080499   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:53:15.080507   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:53:15.080521   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:53:15.162265   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:53:15.162290   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:53:15.162312   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:53:15.235707   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:53:15.235736   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:53:15.275633   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:53:15.275660   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:53:15.325977   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:53:15.326006   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:53:17.839800   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:53:17.852917   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:53:17.852988   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:53:17.885793   70393 cri.go:89] found id: ""
	I0528 21:53:17.885821   70393 logs.go:276] 0 containers: []
	W0528 21:53:17.885828   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:53:17.885834   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:53:17.885893   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:53:17.917156   70393 cri.go:89] found id: ""
	I0528 21:53:17.917178   70393 logs.go:276] 0 containers: []
	W0528 21:53:17.917185   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:53:17.917191   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:53:17.917241   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:53:17.951563   70393 cri.go:89] found id: ""
	I0528 21:53:17.951595   70393 logs.go:276] 0 containers: []
	W0528 21:53:17.951608   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:53:17.951617   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:53:17.951673   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:53:17.987562   70393 cri.go:89] found id: ""
	I0528 21:53:17.987590   70393 logs.go:276] 0 containers: []
	W0528 21:53:17.987600   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:53:17.987608   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:53:17.987667   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:53:18.025276   70393 cri.go:89] found id: ""
	I0528 21:53:18.025301   70393 logs.go:276] 0 containers: []
	W0528 21:53:18.025308   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:53:18.025319   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:53:18.025367   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:53:18.062960   70393 cri.go:89] found id: ""
	I0528 21:53:18.062991   70393 logs.go:276] 0 containers: []
	W0528 21:53:18.063003   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:53:18.063019   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:53:18.063078   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:53:18.096529   70393 cri.go:89] found id: ""
	I0528 21:53:18.096562   70393 logs.go:276] 0 containers: []
	W0528 21:53:18.096574   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:53:18.096581   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:53:18.096640   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:53:18.132095   70393 cri.go:89] found id: ""
	I0528 21:53:18.132124   70393 logs.go:276] 0 containers: []
	W0528 21:53:18.132135   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:53:18.132146   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:53:18.132161   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:53:18.183584   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:53:18.183621   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:53:18.197635   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:53:18.197663   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:53:18.271992   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:53:18.272017   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:53:18.272032   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:53:18.354586   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:53:18.354630   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:53:20.896697   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:53:20.910643   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:53:20.910724   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:53:20.948328   70393 cri.go:89] found id: ""
	I0528 21:53:20.948352   70393 logs.go:276] 0 containers: []
	W0528 21:53:20.948359   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:53:20.948365   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:53:20.948447   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:53:20.984113   70393 cri.go:89] found id: ""
	I0528 21:53:20.984140   70393 logs.go:276] 0 containers: []
	W0528 21:53:20.984152   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:53:20.984159   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:53:20.984223   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:53:21.019437   70393 cri.go:89] found id: ""
	I0528 21:53:21.019461   70393 logs.go:276] 0 containers: []
	W0528 21:53:21.019469   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:53:21.019474   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:53:21.019530   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:53:21.053258   70393 cri.go:89] found id: ""
	I0528 21:53:21.053282   70393 logs.go:276] 0 containers: []
	W0528 21:53:21.053291   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:53:21.053299   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:53:21.053360   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:53:21.090966   70393 cri.go:89] found id: ""
	I0528 21:53:21.090996   70393 logs.go:276] 0 containers: []
	W0528 21:53:21.091005   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:53:21.091012   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:53:21.091071   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:53:21.125183   70393 cri.go:89] found id: ""
	I0528 21:53:21.125211   70393 logs.go:276] 0 containers: []
	W0528 21:53:21.125218   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:53:21.125224   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:53:21.125280   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:53:21.163082   70393 cri.go:89] found id: ""
	I0528 21:53:21.163106   70393 logs.go:276] 0 containers: []
	W0528 21:53:21.163188   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:53:21.163220   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:53:21.163327   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:53:21.204140   70393 cri.go:89] found id: ""
	I0528 21:53:21.204163   70393 logs.go:276] 0 containers: []
	W0528 21:53:21.204170   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:53:21.204184   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:53:21.204201   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:53:21.287772   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:53:21.287794   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:53:21.287806   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:53:21.368663   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:53:21.368700   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:53:21.407552   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:53:21.407583   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:53:21.458457   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:53:21.458496   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:53:23.972021   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:53:23.985397   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:53:23.985456   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:53:24.022870   70393 cri.go:89] found id: ""
	I0528 21:53:24.022892   70393 logs.go:276] 0 containers: []
	W0528 21:53:24.022899   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:53:24.022908   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:53:24.022957   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:53:24.062888   70393 cri.go:89] found id: ""
	I0528 21:53:24.062911   70393 logs.go:276] 0 containers: []
	W0528 21:53:24.062918   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:53:24.062923   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:53:24.062989   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:53:24.109664   70393 cri.go:89] found id: ""
	I0528 21:53:24.109687   70393 logs.go:276] 0 containers: []
	W0528 21:53:24.109693   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:53:24.109699   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:53:24.109749   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:53:24.151608   70393 cri.go:89] found id: ""
	I0528 21:53:24.151630   70393 logs.go:276] 0 containers: []
	W0528 21:53:24.151638   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:53:24.151644   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:53:24.151690   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:53:24.191653   70393 cri.go:89] found id: ""
	I0528 21:53:24.191675   70393 logs.go:276] 0 containers: []
	W0528 21:53:24.191682   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:53:24.191687   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:53:24.191733   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:53:24.223060   70393 cri.go:89] found id: ""
	I0528 21:53:24.223097   70393 logs.go:276] 0 containers: []
	W0528 21:53:24.223108   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:53:24.223115   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:53:24.223176   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:53:24.260770   70393 cri.go:89] found id: ""
	I0528 21:53:24.260793   70393 logs.go:276] 0 containers: []
	W0528 21:53:24.260800   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:53:24.260806   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:53:24.260857   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:53:24.296049   70393 cri.go:89] found id: ""
	I0528 21:53:24.296074   70393 logs.go:276] 0 containers: []
	W0528 21:53:24.296081   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:53:24.296089   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:53:24.296100   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:53:24.371891   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:53:24.371909   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:53:24.371924   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:53:24.446254   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:53:24.446288   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:53:24.486578   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:53:24.486609   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:53:24.539368   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:53:24.539392   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:53:27.053366   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:53:27.067656   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:53:27.067735   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:53:27.103967   70393 cri.go:89] found id: ""
	I0528 21:53:27.103993   70393 logs.go:276] 0 containers: []
	W0528 21:53:27.104005   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:53:27.104012   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:53:27.104069   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:53:27.141635   70393 cri.go:89] found id: ""
	I0528 21:53:27.141669   70393 logs.go:276] 0 containers: []
	W0528 21:53:27.141680   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:53:27.141687   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:53:27.141745   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:53:27.183261   70393 cri.go:89] found id: ""
	I0528 21:53:27.183285   70393 logs.go:276] 0 containers: []
	W0528 21:53:27.183296   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:53:27.183303   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:53:27.183368   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:53:27.217662   70393 cri.go:89] found id: ""
	I0528 21:53:27.217681   70393 logs.go:276] 0 containers: []
	W0528 21:53:27.217689   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:53:27.217694   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:53:27.217742   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:53:27.261523   70393 cri.go:89] found id: ""
	I0528 21:53:27.261550   70393 logs.go:276] 0 containers: []
	W0528 21:53:27.261558   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:53:27.261568   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:53:27.261629   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:53:27.304743   70393 cri.go:89] found id: ""
	I0528 21:53:27.304770   70393 logs.go:276] 0 containers: []
	W0528 21:53:27.304777   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:53:27.304783   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:53:27.304845   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:53:27.341351   70393 cri.go:89] found id: ""
	I0528 21:53:27.341380   70393 logs.go:276] 0 containers: []
	W0528 21:53:27.341392   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:53:27.341399   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:53:27.341462   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:53:27.377737   70393 cri.go:89] found id: ""
	I0528 21:53:27.377780   70393 logs.go:276] 0 containers: []
	W0528 21:53:27.377791   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:53:27.377803   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:53:27.377817   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:53:27.427319   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:53:27.427351   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:53:27.441087   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:53:27.441113   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:53:27.513870   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:53:27.513892   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:53:27.513903   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:53:27.593149   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:53:27.593181   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:53:30.133249   70393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:53:30.146609   70393 kubeadm.go:591] duration metric: took 4m3.262056702s to restartPrimaryControlPlane
	W0528 21:53:30.146690   70393 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0528 21:53:30.146720   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0528 21:53:31.319146   70393 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.172393857s)
	I0528 21:53:31.319228   70393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 21:53:31.334407   70393 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0528 21:53:31.345164   70393 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0528 21:53:31.355541   70393 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0528 21:53:31.355563   70393 kubeadm.go:156] found existing configuration files:
	
	I0528 21:53:31.355618   70393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0528 21:53:31.365455   70393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0528 21:53:31.365519   70393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0528 21:53:31.375655   70393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0528 21:53:31.386354   70393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0528 21:53:31.386404   70393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0528 21:53:31.396532   70393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0528 21:53:31.406473   70393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0528 21:53:31.406534   70393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0528 21:53:31.416383   70393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0528 21:53:31.428829   70393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0528 21:53:31.428889   70393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0528 21:53:31.441290   70393 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0528 21:53:31.532038   70393 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0528 21:53:31.532108   70393 kubeadm.go:309] [preflight] Running pre-flight checks
	I0528 21:53:31.690684   70393 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0528 21:53:31.690794   70393 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0528 21:53:31.690902   70393 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0528 21:53:31.889507   70393 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0528 21:53:31.891739   70393 out.go:204]   - Generating certificates and keys ...
	I0528 21:53:31.891841   70393 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0528 21:53:31.891928   70393 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0528 21:53:31.892026   70393 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0528 21:53:31.892116   70393 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0528 21:53:31.892260   70393 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0528 21:53:31.892341   70393 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0528 21:53:31.892427   70393 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0528 21:53:31.892533   70393 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0528 21:53:31.892745   70393 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0528 21:53:31.893250   70393 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0528 21:53:31.893356   70393 kubeadm.go:309] [certs] Using the existing "sa" key
	I0528 21:53:31.893405   70393 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0528 21:53:31.980092   70393 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0528 21:53:32.272284   70393 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0528 21:53:32.451325   70393 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0528 21:53:32.651993   70393 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0528 21:53:32.666070   70393 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0528 21:53:32.667353   70393 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0528 21:53:32.667401   70393 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0528 21:53:32.813218   70393 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0528 21:53:32.815077   70393 out.go:204]   - Booting up control plane ...
	I0528 21:53:32.815200   70393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0528 21:53:32.819678   70393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0528 21:53:32.821209   70393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0528 21:53:32.821739   70393 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0528 21:53:32.827066   70393 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0528 21:54:12.825536   70393 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0528 21:54:12.825810   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:54:12.826159   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:54:17.826706   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:54:17.826945   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:54:27.827370   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:54:27.827610   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:54:47.828383   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:54:47.828686   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:55:27.830110   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:55:27.830377   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:55:27.830409   70393 kubeadm.go:309] 
	I0528 21:55:27.830460   70393 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0528 21:55:27.830496   70393 kubeadm.go:309] 		timed out waiting for the condition
	I0528 21:55:27.830504   70393 kubeadm.go:309] 
	I0528 21:55:27.830563   70393 kubeadm.go:309] 	This error is likely caused by:
	I0528 21:55:27.830629   70393 kubeadm.go:309] 		- The kubelet is not running
	I0528 21:55:27.830806   70393 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0528 21:55:27.830833   70393 kubeadm.go:309] 
	I0528 21:55:27.830939   70393 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0528 21:55:27.830970   70393 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0528 21:55:27.830999   70393 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0528 21:55:27.831006   70393 kubeadm.go:309] 
	I0528 21:55:27.831089   70393 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0528 21:55:27.831161   70393 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0528 21:55:27.831168   70393 kubeadm.go:309] 
	I0528 21:55:27.831276   70393 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0528 21:55:27.831396   70393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0528 21:55:27.831491   70393 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0528 21:55:27.831586   70393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0528 21:55:27.831597   70393 kubeadm.go:309] 
	I0528 21:55:27.832385   70393 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0528 21:55:27.832478   70393 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0528 21:55:27.832569   70393 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0528 21:55:27.832707   70393 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0528 21:55:27.832768   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0528 21:55:28.286592   70393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 21:55:28.301095   70393 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0528 21:55:28.310856   70393 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0528 21:55:28.310875   70393 kubeadm.go:156] found existing configuration files:
	
	I0528 21:55:28.310916   70393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0528 21:55:28.319713   70393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0528 21:55:28.319757   70393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0528 21:55:28.328964   70393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0528 21:55:28.337404   70393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0528 21:55:28.337456   70393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0528 21:55:28.346480   70393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0528 21:55:28.355427   70393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0528 21:55:28.355475   70393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0528 21:55:28.364843   70393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0528 21:55:28.373821   70393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0528 21:55:28.373874   70393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0528 21:55:28.382542   70393 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0528 21:55:28.448539   70393 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0528 21:55:28.448744   70393 kubeadm.go:309] [preflight] Running pre-flight checks
	I0528 21:55:28.592911   70393 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0528 21:55:28.593029   70393 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0528 21:55:28.593137   70393 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0528 21:55:28.793805   70393 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0528 21:55:28.795709   70393 out.go:204]   - Generating certificates and keys ...
	I0528 21:55:28.795786   70393 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0528 21:55:28.795854   70393 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0528 21:55:28.795959   70393 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0528 21:55:28.796055   70393 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0528 21:55:28.796153   70393 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0528 21:55:28.796349   70393 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0528 21:55:28.796467   70393 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0528 21:55:28.796537   70393 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0528 21:55:28.796610   70393 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0528 21:55:28.796721   70393 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0528 21:55:28.796768   70393 kubeadm.go:309] [certs] Using the existing "sa" key
	I0528 21:55:28.796847   70393 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0528 21:55:28.946885   70393 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0528 21:55:29.128640   70393 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0528 21:55:29.240490   70393 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0528 21:55:29.542128   70393 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0528 21:55:29.563784   70393 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0528 21:55:29.565927   70393 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0528 21:55:29.566159   70393 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0528 21:55:29.711517   70393 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0528 21:55:29.713311   70393 out.go:204]   - Booting up control plane ...
	I0528 21:55:29.713420   70393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0528 21:55:29.717970   70393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0528 21:55:29.718779   70393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0528 21:55:29.719429   70393 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0528 21:55:29.722781   70393 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0528 21:56:09.724902   70393 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0528 21:56:09.725334   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:56:09.725557   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:56:14.726408   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:56:14.726667   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:56:24.727314   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:56:24.727592   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:56:44.728635   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:56:44.728954   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:57:24.729385   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:57:24.729659   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:57:24.729688   70393 kubeadm.go:309] 
	I0528 21:57:24.729745   70393 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0528 21:57:24.729835   70393 kubeadm.go:309] 		timed out waiting for the condition
	I0528 21:57:24.729856   70393 kubeadm.go:309] 
	I0528 21:57:24.729898   70393 kubeadm.go:309] 	This error is likely caused by:
	I0528 21:57:24.729930   70393 kubeadm.go:309] 		- The kubelet is not running
	I0528 21:57:24.730023   70393 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0528 21:57:24.730030   70393 kubeadm.go:309] 
	I0528 21:57:24.730156   70393 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0528 21:57:24.730212   70393 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0528 21:57:24.730267   70393 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0528 21:57:24.730278   70393 kubeadm.go:309] 
	I0528 21:57:24.730403   70393 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0528 21:57:24.730522   70393 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0528 21:57:24.730533   70393 kubeadm.go:309] 
	I0528 21:57:24.730669   70393 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0528 21:57:24.730788   70393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0528 21:57:24.730899   70393 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0528 21:57:24.731020   70393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0528 21:57:24.731039   70393 kubeadm.go:309] 
	I0528 21:57:24.731657   70393 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0528 21:57:24.731752   70393 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0528 21:57:24.731861   70393 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0528 21:57:24.731942   70393 kubeadm.go:393] duration metric: took 7m57.905523124s to StartCluster
	I0528 21:57:24.731997   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:57:24.732064   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:57:24.772889   70393 cri.go:89] found id: ""
	I0528 21:57:24.772916   70393 logs.go:276] 0 containers: []
	W0528 21:57:24.772923   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:57:24.772929   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:57:24.772988   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:57:24.806418   70393 cri.go:89] found id: ""
	I0528 21:57:24.806447   70393 logs.go:276] 0 containers: []
	W0528 21:57:24.806458   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:57:24.806467   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:57:24.806534   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:57:24.844994   70393 cri.go:89] found id: ""
	I0528 21:57:24.845020   70393 logs.go:276] 0 containers: []
	W0528 21:57:24.845028   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:57:24.845035   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:57:24.845098   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:57:24.880517   70393 cri.go:89] found id: ""
	I0528 21:57:24.880547   70393 logs.go:276] 0 containers: []
	W0528 21:57:24.880558   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:57:24.880566   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:57:24.880615   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:57:24.917534   70393 cri.go:89] found id: ""
	I0528 21:57:24.917561   70393 logs.go:276] 0 containers: []
	W0528 21:57:24.917569   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:57:24.917575   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:57:24.917624   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:57:24.952898   70393 cri.go:89] found id: ""
	I0528 21:57:24.952929   70393 logs.go:276] 0 containers: []
	W0528 21:57:24.952940   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:57:24.952948   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:57:24.953011   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:57:24.994957   70393 cri.go:89] found id: ""
	I0528 21:57:24.994983   70393 logs.go:276] 0 containers: []
	W0528 21:57:24.994990   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:57:24.994996   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:57:24.995046   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:57:25.032594   70393 cri.go:89] found id: ""
	I0528 21:57:25.032617   70393 logs.go:276] 0 containers: []
	W0528 21:57:25.032624   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:57:25.032633   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:57:25.032645   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:57:25.112858   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:57:25.112882   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:57:25.112894   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:57:25.217748   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:57:25.217792   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:57:25.289998   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:57:25.290035   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:57:25.344833   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:57:25.344868   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0528 21:57:25.360547   70393 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0528 21:57:25.360594   70393 out.go:239] * 
	* 
	W0528 21:57:25.360659   70393 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0528 21:57:25.360693   70393 out.go:239] * 
	W0528 21:57:25.361545   70393 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0528 21:57:25.365387   70393 out.go:177] 
	W0528 21:57:25.366681   70393 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0528 21:57:25.366731   70393 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0528 21:57:25.366772   70393 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0528 21:57:25.369011   70393 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-499466 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
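For reference, the commands below mirror the advice kubeadm printed in the captured stderr above (systemctl/journalctl for the kubelet, the kubelet healthz probe, and crictl for CRI-O containers). This is a minimal diagnostic sketch against the failing profile, assuming the VM from this test is still reachable; it was not run as part of the recorded test, and CONTAINERID is a hypothetical placeholder.

    # Is the kubelet running on the node, and why did it exit?
    out/minikube-linux-amd64 -p old-k8s-version-499466 ssh "sudo systemctl status kubelet"
    out/minikube-linux-amd64 -p old-k8s-version-499466 ssh "sudo journalctl -xeu kubelet | tail -n 100"
    # Probe the kubelet health endpoint that kubeadm polls during wait-control-plane
    out/minikube-linux-amd64 -p old-k8s-version-499466 ssh "curl -sSL http://localhost:10248/healthz"
    # List control-plane containers under CRI-O and read the logs of a failing one
    out/minikube-linux-amd64 -p old-k8s-version-499466 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
    out/minikube-linux-amd64 -p old-k8s-version-499466 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID"
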
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-499466 -n old-k8s-version-499466
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-499466 -n old-k8s-version-499466: exit status 2 (221.415625ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
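The stderr above also carries a Suggestion line: pass --extra-config=kubelet.cgroup-driver=systemd to minikube start. A hedged sketch of that retry, reusing the start arguments recorded for this test plus the suggested flag, followed by a check of the cgroup manager CRI-O reports inside the VM (the grep filter is illustrative, not part of the test run):

    # Retry the second start with the cgroup-driver hint from the Suggestion line
    out/minikube-linux-amd64 start -p old-k8s-version-499466 --memory=2200 --driver=kvm2 \
      --container-runtime=crio --kubernetes-version=v1.20.0 \
      --extra-config=kubelet.cgroup-driver=systemd
    # Compare with the cgroup manager CRI-O is configured to use
    out/minikube-linux-amd64 -p old-k8s-version-499466 ssh "sudo crio config" | grep -i cgroup
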
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-499466 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-110727                           | enable-default-cni-110727    | jenkins | v1.33.1 | 28 May 24 21:39 UTC | 28 May 24 21:39 UTC |
	|         | sudo systemctl status crio                             |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-110727                           | enable-default-cni-110727    | jenkins | v1.33.1 | 28 May 24 21:39 UTC | 28 May 24 21:39 UTC |
	|         | sudo systemctl cat crio                                |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-110727                           | enable-default-cni-110727    | jenkins | v1.33.1 | 28 May 24 21:39 UTC | 28 May 24 21:39 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |         |                     |                     |
	|         | \;                                                     |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-110727                           | enable-default-cni-110727    | jenkins | v1.33.1 | 28 May 24 21:39 UTC | 28 May 24 21:39 UTC |
	|         | sudo crio config                                       |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-110727                           | enable-default-cni-110727    | jenkins | v1.33.1 | 28 May 24 21:39 UTC | 28 May 24 21:39 UTC |
	| start   | -p embed-certs-595279                                  | embed-certs-595279           | jenkins | v1.33.1 | 28 May 24 21:39 UTC | 28 May 24 21:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-290122             | no-preload-290122            | jenkins | v1.33.1 | 28 May 24 21:41 UTC | 28 May 24 21:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-290122                                   | no-preload-290122            | jenkins | v1.33.1 | 28 May 24 21:41 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-595279            | embed-certs-595279           | jenkins | v1.33.1 | 28 May 24 21:41 UTC | 28 May 24 21:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-595279                                  | embed-certs-595279           | jenkins | v1.33.1 | 28 May 24 21:41 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-499466        | old-k8s-version-499466       | jenkins | v1.33.1 | 28 May 24 21:43 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-290122                  | no-preload-290122            | jenkins | v1.33.1 | 28 May 24 21:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-595279                 | embed-certs-595279           | jenkins | v1.33.1 | 28 May 24 21:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-290122                                   | no-preload-290122            | jenkins | v1.33.1 | 28 May 24 21:44 UTC | 28 May 24 21:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-595279                                  | embed-certs-595279           | jenkins | v1.33.1 | 28 May 24 21:44 UTC | 28 May 24 21:53 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-499466                              | old-k8s-version-499466       | jenkins | v1.33.1 | 28 May 24 21:45 UTC | 28 May 24 21:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-499466             | old-k8s-version-499466       | jenkins | v1.33.1 | 28 May 24 21:45 UTC | 28 May 24 21:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-499466                              | old-k8s-version-499466       | jenkins | v1.33.1 | 28 May 24 21:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-257793                              | cert-expiration-257793       | jenkins | v1.33.1 | 28 May 24 21:46 UTC | 28 May 24 21:46 UTC |
	| delete  | -p                                                     | disable-driver-mounts-807140 | jenkins | v1.33.1 | 28 May 24 21:46 UTC | 28 May 24 21:46 UTC |
	|         | disable-driver-mounts-807140                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-249165 | jenkins | v1.33.1 | 28 May 24 21:46 UTC | 28 May 24 21:50 UTC |
	|         | default-k8s-diff-port-249165                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-249165  | default-k8s-diff-port-249165 | jenkins | v1.33.1 | 28 May 24 21:51 UTC | 28 May 24 21:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-249165 | jenkins | v1.33.1 | 28 May 24 21:51 UTC |                     |
	|         | default-k8s-diff-port-249165                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-249165       | default-k8s-diff-port-249165 | jenkins | v1.33.1 | 28 May 24 21:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-249165 | jenkins | v1.33.1 | 28 May 24 21:53 UTC |                     |
	|         | default-k8s-diff-port-249165                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/28 21:53:40
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0528 21:53:40.744358   73188 out.go:291] Setting OutFile to fd 1 ...
	I0528 21:53:40.744653   73188 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:53:40.744664   73188 out.go:304] Setting ErrFile to fd 2...
	I0528 21:53:40.744668   73188 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:53:40.744923   73188 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
	I0528 21:53:40.745490   73188 out.go:298] Setting JSON to false
	I0528 21:53:40.746663   73188 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5764,"bootTime":1716927457,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0528 21:53:40.746723   73188 start.go:139] virtualization: kvm guest
	I0528 21:53:40.749013   73188 out.go:177] * [default-k8s-diff-port-249165] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0528 21:53:40.750611   73188 notify.go:220] Checking for updates...
	I0528 21:53:40.750618   73188 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 21:53:40.752116   73188 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 21:53:40.753384   73188 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 21:53:40.754612   73188 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 21:53:40.755846   73188 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0528 21:53:40.756972   73188 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 21:53:40.758627   73188 config.go:182] Loaded profile config "default-k8s-diff-port-249165": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 21:53:40.759050   73188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:53:40.759106   73188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:53:40.774337   73188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37895
	I0528 21:53:40.774754   73188 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:53:40.775318   73188 main.go:141] libmachine: Using API Version  1
	I0528 21:53:40.775344   73188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:53:40.775633   73188 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:53:40.775791   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 21:53:40.776007   73188 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 21:53:40.776327   73188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:53:40.776382   73188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:53:40.790531   73188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41603
	I0528 21:53:40.790970   73188 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:53:40.791471   73188 main.go:141] libmachine: Using API Version  1
	I0528 21:53:40.791498   73188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:53:40.791802   73188 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:53:40.791983   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 21:53:40.826633   73188 out.go:177] * Using the kvm2 driver based on existing profile
	I0528 21:53:40.827847   73188 start.go:297] selected driver: kvm2
	I0528 21:53:40.827863   73188 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-249165 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-249165 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.48 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:53:40.827981   73188 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0528 21:53:40.828705   73188 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 21:53:40.828777   73188 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18966-3963/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0528 21:53:40.844223   73188 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0528 21:53:40.844574   73188 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 21:53:40.844638   73188 cni.go:84] Creating CNI manager for ""
	I0528 21:53:40.844650   73188 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 21:53:40.844682   73188 start.go:340] cluster config:
	{Name:default-k8s-diff-port-249165 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-249165 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.48 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:53:40.844775   73188 iso.go:125] acquiring lock: {Name:mk0ef48bd19f4019c3ee87391d15b9afc1d5e8d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 21:53:40.846544   73188 out.go:177] * Starting "default-k8s-diff-port-249165" primary control-plane node in "default-k8s-diff-port-249165" cluster
	I0528 21:53:40.847754   73188 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 21:53:40.847792   73188 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0528 21:53:40.847801   73188 cache.go:56] Caching tarball of preloaded images
	I0528 21:53:40.847870   73188 preload.go:173] Found /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0528 21:53:40.847880   73188 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0528 21:53:40.847964   73188 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/default-k8s-diff-port-249165/config.json ...
	I0528 21:53:40.848196   73188 start.go:360] acquireMachinesLock for default-k8s-diff-port-249165: {Name:mk6fb3103b370c7f3d9923fd9de7afe2c77239d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0528 21:53:40.848256   73188 start.go:364] duration metric: took 38.994µs to acquireMachinesLock for "default-k8s-diff-port-249165"
	I0528 21:53:40.848271   73188 start.go:96] Skipping create...Using existing machine configuration
	I0528 21:53:40.848281   73188 fix.go:54] fixHost starting: 
	I0528 21:53:40.848534   73188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:53:40.848571   73188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:53:40.863227   73188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32843
	I0528 21:53:40.863708   73188 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:53:40.864162   73188 main.go:141] libmachine: Using API Version  1
	I0528 21:53:40.864182   73188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:53:40.864616   73188 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:53:40.864794   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 21:53:40.864952   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetState
	I0528 21:53:40.866583   73188 fix.go:112] recreateIfNeeded on default-k8s-diff-port-249165: state=Running err=<nil>
	W0528 21:53:40.866600   73188 fix.go:138] unexpected machine state, will restart: <nil>
	I0528 21:53:40.868382   73188 out.go:177] * Updating the running kvm2 "default-k8s-diff-port-249165" VM ...
	I0528 21:53:38.450836   70002 logs.go:123] Gathering logs for storage-provisioner [9c5ee70d85c3e595c91ab0c4bcfdbd1f1161b2643af86fce85c056fc5d38482d] ...
	I0528 21:53:38.450866   70002 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c5ee70d85c3e595c91ab0c4bcfdbd1f1161b2643af86fce85c056fc5d38482d"
	I0528 21:53:38.485575   70002 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:53:38.485610   70002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:53:38.854290   70002 logs.go:123] Gathering logs for container status ...
	I0528 21:53:38.854325   70002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:53:38.902357   70002 logs.go:123] Gathering logs for dmesg ...
	I0528 21:53:38.902389   70002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:53:38.916785   70002 logs.go:123] Gathering logs for etcd [3047accd150d979b115d9296d1b09bdad9f23c29d7c066800914fe3e6e001d3c] ...
	I0528 21:53:38.916820   70002 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3047accd150d979b115d9296d1b09bdad9f23c29d7c066800914fe3e6e001d3c"
	I0528 21:53:38.982119   70002 logs.go:123] Gathering logs for kube-apiserver [056fb79dac85895cd23ca69b8a238157e22babd5bf16ae416bb52d6e9b470622] ...
	I0528 21:53:38.982148   70002 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 056fb79dac85895cd23ca69b8a238157e22babd5bf16ae416bb52d6e9b470622"
	I0528 21:53:39.031038   70002 logs.go:123] Gathering logs for kube-proxy [cfb41c075cb48a9d458fcc2a85aca29b37f56931246adac5b1230661c528edcc] ...
	I0528 21:53:39.031066   70002 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cfb41c075cb48a9d458fcc2a85aca29b37f56931246adac5b1230661c528edcc"
	I0528 21:53:39.068094   70002 logs.go:123] Gathering logs for kube-controller-manager [b5366e4c2bcdabc25f55a71903840f5a97745fb1bac231cf9c73daaf2b9dab89] ...
	I0528 21:53:39.068123   70002 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5366e4c2bcdabc25f55a71903840f5a97745fb1bac231cf9c73daaf2b9dab89"
	I0528 21:53:39.129214   70002 logs.go:123] Gathering logs for kubelet ...
	I0528 21:53:39.129248   70002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:53:39.191483   70002 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:53:39.191523   70002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0528 21:53:41.813698   70002 system_pods.go:59] 8 kube-system pods found
	I0528 21:53:41.813725   70002 system_pods.go:61] "coredns-7db6d8ff4d-8cb7b" [b3908d89-cfc6-4f1a-9aef-861aac0d3e29] Running
	I0528 21:53:41.813730   70002 system_pods.go:61] "etcd-embed-certs-595279" [58581274-a239-4367-926e-c333f201d4f8] Running
	I0528 21:53:41.813733   70002 system_pods.go:61] "kube-apiserver-embed-certs-595279" [cc2dd164-709f-4e59-81bc-ce9d30bbced9] Running
	I0528 21:53:41.813736   70002 system_pods.go:61] "kube-controller-manager-embed-certs-595279" [e049af67-fff8-466f-96ff-81d148602884] Running
	I0528 21:53:41.813739   70002 system_pods.go:61] "kube-proxy-pnl5w" [9c2c68bc-42c2-425e-ae35-a8c07b5d5221] Running
	I0528 21:53:41.813742   70002 system_pods.go:61] "kube-scheduler-embed-certs-595279" [ab6bff2a-7266-4c5c-96bc-c87ef15dd342] Running
	I0528 21:53:41.813748   70002 system_pods.go:61] "metrics-server-569cc877fc-f6fz2" [b5e432cd-3b95-4f20-b9b3-c498512a7564] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0528 21:53:41.813751   70002 system_pods.go:61] "storage-provisioner" [7bf52279-1fbc-40e5-8376-992c545c55dd] Running
	I0528 21:53:41.813771   70002 system_pods.go:74] duration metric: took 3.894565784s to wait for pod list to return data ...
	I0528 21:53:41.813780   70002 default_sa.go:34] waiting for default service account to be created ...
	I0528 21:53:41.816297   70002 default_sa.go:45] found service account: "default"
	I0528 21:53:41.816319   70002 default_sa.go:55] duration metric: took 2.532587ms for default service account to be created ...
	I0528 21:53:41.816326   70002 system_pods.go:116] waiting for k8s-apps to be running ...
	I0528 21:53:41.821407   70002 system_pods.go:86] 8 kube-system pods found
	I0528 21:53:41.821437   70002 system_pods.go:89] "coredns-7db6d8ff4d-8cb7b" [b3908d89-cfc6-4f1a-9aef-861aac0d3e29] Running
	I0528 21:53:41.821447   70002 system_pods.go:89] "etcd-embed-certs-595279" [58581274-a239-4367-926e-c333f201d4f8] Running
	I0528 21:53:41.821453   70002 system_pods.go:89] "kube-apiserver-embed-certs-595279" [cc2dd164-709f-4e59-81bc-ce9d30bbced9] Running
	I0528 21:53:41.821458   70002 system_pods.go:89] "kube-controller-manager-embed-certs-595279" [e049af67-fff8-466f-96ff-81d148602884] Running
	I0528 21:53:41.821461   70002 system_pods.go:89] "kube-proxy-pnl5w" [9c2c68bc-42c2-425e-ae35-a8c07b5d5221] Running
	I0528 21:53:41.821465   70002 system_pods.go:89] "kube-scheduler-embed-certs-595279" [ab6bff2a-7266-4c5c-96bc-c87ef15dd342] Running
	I0528 21:53:41.821472   70002 system_pods.go:89] "metrics-server-569cc877fc-f6fz2" [b5e432cd-3b95-4f20-b9b3-c498512a7564] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0528 21:53:41.821480   70002 system_pods.go:89] "storage-provisioner" [7bf52279-1fbc-40e5-8376-992c545c55dd] Running
	I0528 21:53:41.821489   70002 system_pods.go:126] duration metric: took 5.157831ms to wait for k8s-apps to be running ...
	I0528 21:53:41.821498   70002 system_svc.go:44] waiting for kubelet service to be running ....
	I0528 21:53:41.821538   70002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 21:53:41.838819   70002 system_svc.go:56] duration metric: took 17.315204ms WaitForService to wait for kubelet
	I0528 21:53:41.838844   70002 kubeadm.go:576] duration metric: took 4m26.419891509s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 21:53:41.838864   70002 node_conditions.go:102] verifying NodePressure condition ...
	I0528 21:53:41.841408   70002 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 21:53:41.841424   70002 node_conditions.go:123] node cpu capacity is 2
	I0528 21:53:41.841433   70002 node_conditions.go:105] duration metric: took 2.56566ms to run NodePressure ...
	I0528 21:53:41.841445   70002 start.go:240] waiting for startup goroutines ...
	I0528 21:53:41.841452   70002 start.go:245] waiting for cluster config update ...
	I0528 21:53:41.841463   70002 start.go:254] writing updated cluster config ...
	I0528 21:53:41.841709   70002 ssh_runner.go:195] Run: rm -f paused
	I0528 21:53:41.886820   70002 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0528 21:53:41.888710   70002 out.go:177] * Done! kubectl is now configured to use "embed-certs-595279" cluster and "default" namespace by default
	I0528 21:53:40.749506   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:53:43.248909   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:53:40.869524   73188 machine.go:94] provisionDockerMachine start ...
	I0528 21:53:40.869542   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 21:53:40.869730   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 21:53:40.872099   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:53:40.872470   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:53:40.872491   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:53:40.872625   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHPort
	I0528 21:53:40.872772   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:53:40.872952   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:53:40.873092   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHUsername
	I0528 21:53:40.873253   73188 main.go:141] libmachine: Using SSH client type: native
	I0528 21:53:40.873429   73188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.48 22 <nil> <nil>}
	I0528 21:53:40.873438   73188 main.go:141] libmachine: About to run SSH command:
	hostname
	I0528 21:53:43.770029   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:53:45.748750   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:53:48.248904   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:53:46.841982   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:53:50.249442   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:53:52.749680   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:53:52.922023   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:53:55.251148   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:53:57.748960   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:53:55.994071   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:53:59.749114   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:02.248306   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:02.074025   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:05.145996   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:04.248616   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:06.749284   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:09.247806   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:11.249481   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:13.748196   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:12.825536   70393 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0528 21:54:12.825810   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:54:12.826159   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:54:14.266167   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:15.749468   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:18.248675   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:17.826706   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:54:17.826945   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:54:17.338025   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:20.248941   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:22.749284   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:23.417971   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:25.248681   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:27.748556   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:27.827370   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:54:27.827610   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:54:26.490049   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:29.748865   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:32.248746   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:32.569987   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:35.641969   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:34.249483   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:36.748835   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:38.749264   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:41.251039   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:43.248816   69886 pod_ready.go:81] duration metric: took 4m0.006582939s for pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace to be "Ready" ...
	E0528 21:54:43.248839   69886 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0528 21:54:43.248847   69886 pod_ready.go:38] duration metric: took 4m4.041932949s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 21:54:43.248863   69886 api_server.go:52] waiting for apiserver process to appear ...
	I0528 21:54:43.248889   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:54:43.248933   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:54:43.296609   69886 cri.go:89] found id: "42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc"
	I0528 21:54:43.296630   69886 cri.go:89] found id: ""
	I0528 21:54:43.296638   69886 logs.go:276] 1 containers: [42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc]
	I0528 21:54:43.296694   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:43.301171   69886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:54:43.301211   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:54:43.340772   69886 cri.go:89] found id: "48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e"
	I0528 21:54:43.340793   69886 cri.go:89] found id: ""
	I0528 21:54:43.340799   69886 logs.go:276] 1 containers: [48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e]
	I0528 21:54:43.340843   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:43.345422   69886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:54:43.345489   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:54:43.392432   69886 cri.go:89] found id: "ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b"
	I0528 21:54:43.392458   69886 cri.go:89] found id: ""
	I0528 21:54:43.392467   69886 logs.go:276] 1 containers: [ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b]
	I0528 21:54:43.392521   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:43.396870   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:54:43.396943   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:54:43.433491   69886 cri.go:89] found id: "e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af"
	I0528 21:54:43.433516   69886 cri.go:89] found id: ""
	I0528 21:54:43.433525   69886 logs.go:276] 1 containers: [e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af]
	I0528 21:54:43.433584   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:43.438209   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:54:43.438276   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:54:43.479257   69886 cri.go:89] found id: "9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910"
	I0528 21:54:43.479299   69886 cri.go:89] found id: ""
	I0528 21:54:43.479309   69886 logs.go:276] 1 containers: [9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910]
	I0528 21:54:43.479425   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:43.484063   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:54:43.484127   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:54:43.523360   69886 cri.go:89] found id: "e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a"
	I0528 21:54:43.523384   69886 cri.go:89] found id: ""
	I0528 21:54:43.523394   69886 logs.go:276] 1 containers: [e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a]
	I0528 21:54:43.523443   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:43.527859   69886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:54:43.527915   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:54:43.565610   69886 cri.go:89] found id: ""
	I0528 21:54:43.565631   69886 logs.go:276] 0 containers: []
	W0528 21:54:43.565638   69886 logs.go:278] No container was found matching "kindnet"
	I0528 21:54:43.565643   69886 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0528 21:54:43.565687   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0528 21:54:43.603133   69886 cri.go:89] found id: "6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba"
	I0528 21:54:43.603155   69886 cri.go:89] found id: "912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0"
	I0528 21:54:43.603159   69886 cri.go:89] found id: ""
	I0528 21:54:43.603166   69886 logs.go:276] 2 containers: [6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba 912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0]
	I0528 21:54:43.603233   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:43.607421   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:43.611570   69886 logs.go:123] Gathering logs for kube-apiserver [42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc] ...
	I0528 21:54:43.611593   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc"
	I0528 21:54:43.656455   69886 logs.go:123] Gathering logs for etcd [48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e] ...
	I0528 21:54:43.656483   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e"
	I0528 21:54:43.708385   69886 logs.go:123] Gathering logs for kube-controller-manager [e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a] ...
	I0528 21:54:43.708416   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a"
	I0528 21:54:43.766267   69886 logs.go:123] Gathering logs for storage-provisioner [6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba] ...
	I0528 21:54:43.766300   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba"
	I0528 21:54:43.813734   69886 logs.go:123] Gathering logs for kube-proxy [9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910] ...
	I0528 21:54:43.813782   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910"
	I0528 21:54:43.857289   69886 logs.go:123] Gathering logs for storage-provisioner [912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0] ...
	I0528 21:54:43.857317   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0"
	I0528 21:54:43.897976   69886 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:54:43.898001   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:54:41.721973   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:44.798063   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:44.394070   69886 logs.go:123] Gathering logs for kubelet ...
	I0528 21:54:44.394112   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:54:44.450041   69886 logs.go:123] Gathering logs for dmesg ...
	I0528 21:54:44.450078   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:54:44.464067   69886 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:54:44.464092   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0528 21:54:44.588402   69886 logs.go:123] Gathering logs for coredns [ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b] ...
	I0528 21:54:44.588432   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b"
	I0528 21:54:44.631477   69886 logs.go:123] Gathering logs for kube-scheduler [e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af] ...
	I0528 21:54:44.631505   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af"
	I0528 21:54:44.676531   69886 logs.go:123] Gathering logs for container status ...
	I0528 21:54:44.676562   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:54:47.229026   69886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:54:47.247014   69886 api_server.go:72] duration metric: took 4m15.746572678s to wait for apiserver process to appear ...
	I0528 21:54:47.247043   69886 api_server.go:88] waiting for apiserver healthz status ...
	I0528 21:54:47.247085   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:54:47.247153   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:54:47.291560   69886 cri.go:89] found id: "42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc"
	I0528 21:54:47.291592   69886 cri.go:89] found id: ""
	I0528 21:54:47.291602   69886 logs.go:276] 1 containers: [42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc]
	I0528 21:54:47.291667   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:47.296538   69886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:54:47.296597   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:54:47.335786   69886 cri.go:89] found id: "48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e"
	I0528 21:54:47.335809   69886 cri.go:89] found id: ""
	I0528 21:54:47.335817   69886 logs.go:276] 1 containers: [48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e]
	I0528 21:54:47.335861   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:47.340222   69886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:54:47.340295   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:54:47.376487   69886 cri.go:89] found id: "ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b"
	I0528 21:54:47.376518   69886 cri.go:89] found id: ""
	I0528 21:54:47.376528   69886 logs.go:276] 1 containers: [ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b]
	I0528 21:54:47.376587   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:47.380986   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:54:47.381043   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:54:47.419121   69886 cri.go:89] found id: "e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af"
	I0528 21:54:47.419144   69886 cri.go:89] found id: ""
	I0528 21:54:47.419151   69886 logs.go:276] 1 containers: [e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af]
	I0528 21:54:47.419194   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:47.423323   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:54:47.423378   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:54:47.460781   69886 cri.go:89] found id: "9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910"
	I0528 21:54:47.460806   69886 cri.go:89] found id: ""
	I0528 21:54:47.460813   69886 logs.go:276] 1 containers: [9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910]
	I0528 21:54:47.460856   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:47.465054   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:54:47.465107   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:54:47.510054   69886 cri.go:89] found id: "e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a"
	I0528 21:54:47.510077   69886 cri.go:89] found id: ""
	I0528 21:54:47.510085   69886 logs.go:276] 1 containers: [e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a]
	I0528 21:54:47.510136   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:47.514707   69886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:54:47.514764   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:54:47.551564   69886 cri.go:89] found id: ""
	I0528 21:54:47.551587   69886 logs.go:276] 0 containers: []
	W0528 21:54:47.551594   69886 logs.go:278] No container was found matching "kindnet"
	I0528 21:54:47.551600   69886 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0528 21:54:47.551647   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0528 21:54:47.591484   69886 cri.go:89] found id: "6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba"
	I0528 21:54:47.591506   69886 cri.go:89] found id: "912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0"
	I0528 21:54:47.591511   69886 cri.go:89] found id: ""
	I0528 21:54:47.591520   69886 logs.go:276] 2 containers: [6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba 912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0]
	I0528 21:54:47.591581   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:47.596620   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:47.600861   69886 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:54:47.600884   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:54:48.031181   69886 logs.go:123] Gathering logs for kubelet ...
	I0528 21:54:48.031218   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:54:48.085321   69886 logs.go:123] Gathering logs for kube-apiserver [42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc] ...
	I0528 21:54:48.085354   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc"
	I0528 21:54:48.135504   69886 logs.go:123] Gathering logs for coredns [ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b] ...
	I0528 21:54:48.135538   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b"
	I0528 21:54:48.172440   69886 logs.go:123] Gathering logs for kube-proxy [9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910] ...
	I0528 21:54:48.172474   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910"
	I0528 21:54:48.210817   69886 logs.go:123] Gathering logs for storage-provisioner [6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba] ...
	I0528 21:54:48.210849   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba"
	I0528 21:54:48.248170   69886 logs.go:123] Gathering logs for storage-provisioner [912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0] ...
	I0528 21:54:48.248196   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0"
	I0528 21:54:48.290905   69886 logs.go:123] Gathering logs for container status ...
	I0528 21:54:48.290933   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:54:48.344302   69886 logs.go:123] Gathering logs for dmesg ...
	I0528 21:54:48.344333   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:54:48.363912   69886 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:54:48.363940   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0528 21:54:48.490794   69886 logs.go:123] Gathering logs for etcd [48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e] ...
	I0528 21:54:48.490836   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e"
	I0528 21:54:48.538412   69886 logs.go:123] Gathering logs for kube-scheduler [e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af] ...
	I0528 21:54:48.538443   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af"
	I0528 21:54:48.574693   69886 logs.go:123] Gathering logs for kube-controller-manager [e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a] ...
	I0528 21:54:48.574724   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a"
	I0528 21:54:47.828383   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:54:47.828686   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:54:51.128492   69886 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0528 21:54:51.132736   69886 api_server.go:279] https://192.168.50.138:8443/healthz returned 200:
	ok
	I0528 21:54:51.133908   69886 api_server.go:141] control plane version: v1.30.1
	I0528 21:54:51.133927   69886 api_server.go:131] duration metric: took 3.886877047s to wait for apiserver health ...
	I0528 21:54:51.133935   69886 system_pods.go:43] waiting for kube-system pods to appear ...
	I0528 21:54:51.133953   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:54:51.134009   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:54:51.174021   69886 cri.go:89] found id: "42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc"
	I0528 21:54:51.174042   69886 cri.go:89] found id: ""
	I0528 21:54:51.174049   69886 logs.go:276] 1 containers: [42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc]
	I0528 21:54:51.174100   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:51.179416   69886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:54:51.179487   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:54:51.218954   69886 cri.go:89] found id: "48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e"
	I0528 21:54:51.218981   69886 cri.go:89] found id: ""
	I0528 21:54:51.218992   69886 logs.go:276] 1 containers: [48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e]
	I0528 21:54:51.219055   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:51.224849   69886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:54:51.224920   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:54:51.265274   69886 cri.go:89] found id: "ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b"
	I0528 21:54:51.265304   69886 cri.go:89] found id: ""
	I0528 21:54:51.265314   69886 logs.go:276] 1 containers: [ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b]
	I0528 21:54:51.265388   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:51.270027   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:54:51.270104   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:54:51.316234   69886 cri.go:89] found id: "e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af"
	I0528 21:54:51.316259   69886 cri.go:89] found id: ""
	I0528 21:54:51.316269   69886 logs.go:276] 1 containers: [e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af]
	I0528 21:54:51.316324   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:51.320705   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:54:51.320771   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:54:51.358054   69886 cri.go:89] found id: "9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910"
	I0528 21:54:51.358079   69886 cri.go:89] found id: ""
	I0528 21:54:51.358089   69886 logs.go:276] 1 containers: [9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910]
	I0528 21:54:51.358136   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:51.363687   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:54:51.363753   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:54:51.409441   69886 cri.go:89] found id: "e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a"
	I0528 21:54:51.409462   69886 cri.go:89] found id: ""
	I0528 21:54:51.409470   69886 logs.go:276] 1 containers: [e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a]
	I0528 21:54:51.409517   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:51.414069   69886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:54:51.414125   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:54:51.454212   69886 cri.go:89] found id: ""
	I0528 21:54:51.454245   69886 logs.go:276] 0 containers: []
	W0528 21:54:51.454255   69886 logs.go:278] No container was found matching "kindnet"
	I0528 21:54:51.454263   69886 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0528 21:54:51.454324   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0528 21:54:51.492146   69886 cri.go:89] found id: "6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba"
	I0528 21:54:51.492174   69886 cri.go:89] found id: "912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0"
	I0528 21:54:51.492181   69886 cri.go:89] found id: ""
	I0528 21:54:51.492190   69886 logs.go:276] 2 containers: [6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba 912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0]
	I0528 21:54:51.492262   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:51.497116   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:51.501448   69886 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:54:51.501469   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:54:51.871114   69886 logs.go:123] Gathering logs for container status ...
	I0528 21:54:51.871151   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:54:51.918562   69886 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:54:51.918590   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0528 21:54:52.031780   69886 logs.go:123] Gathering logs for kube-apiserver [42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc] ...
	I0528 21:54:52.031819   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc"
	I0528 21:54:52.090798   69886 logs.go:123] Gathering logs for kube-proxy [9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910] ...
	I0528 21:54:52.090827   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910"
	I0528 21:54:52.131645   69886 logs.go:123] Gathering logs for kube-controller-manager [e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a] ...
	I0528 21:54:52.131673   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a"
	I0528 21:54:52.191137   69886 logs.go:123] Gathering logs for storage-provisioner [6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba] ...
	I0528 21:54:52.191172   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba"
	I0528 21:54:52.241028   69886 logs.go:123] Gathering logs for storage-provisioner [912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0] ...
	I0528 21:54:52.241054   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0"
	I0528 21:54:52.276075   69886 logs.go:123] Gathering logs for kubelet ...
	I0528 21:54:52.276115   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:54:52.328268   69886 logs.go:123] Gathering logs for dmesg ...
	I0528 21:54:52.328307   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:54:52.342509   69886 logs.go:123] Gathering logs for etcd [48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e] ...
	I0528 21:54:52.342542   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e"
	I0528 21:54:52.390934   69886 logs.go:123] Gathering logs for coredns [ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b] ...
	I0528 21:54:52.390980   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b"
	I0528 21:54:52.429778   69886 logs.go:123] Gathering logs for kube-scheduler [e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af] ...
	I0528 21:54:52.429809   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af"
	I0528 21:54:54.975461   69886 system_pods.go:59] 8 kube-system pods found
	I0528 21:54:54.975495   69886 system_pods.go:61] "coredns-7db6d8ff4d-fmk2h" [a084dfb5-5818-4244-9052-a9f861b45617] Running
	I0528 21:54:54.975502   69886 system_pods.go:61] "etcd-no-preload-290122" [85b87ff0-50e4-4b23-b4dd-8442068b1af3] Running
	I0528 21:54:54.975508   69886 system_pods.go:61] "kube-apiserver-no-preload-290122" [2e2cecbc-1755-4cdc-8963-ab1e5b73e9f1] Running
	I0528 21:54:54.975514   69886 system_pods.go:61] "kube-controller-manager-no-preload-290122" [9a6218b4-fbd3-46ff-8eb5-a19f5ed9db72] Running
	I0528 21:54:54.975519   69886 system_pods.go:61] "kube-proxy-w45qh" [f962c73d-872d-4f78-a628-267cb0be49bb] Running
	I0528 21:54:54.975524   69886 system_pods.go:61] "kube-scheduler-no-preload-290122" [d191845d-d1ff-45d8-8601-b0395e2752f1] Running
	I0528 21:54:54.975532   69886 system_pods.go:61] "metrics-server-569cc877fc-j2khc" [2254e89c-3a61-4523-99a2-27ec92e73c9a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0528 21:54:54.975540   69886 system_pods.go:61] "storage-provisioner" [fc1a5463-05e0-4213-a7a8-2dd7f355ac36] Running
	I0528 21:54:54.975549   69886 system_pods.go:74] duration metric: took 3.841608486s to wait for pod list to return data ...
	I0528 21:54:54.975564   69886 default_sa.go:34] waiting for default service account to be created ...
	I0528 21:54:54.977757   69886 default_sa.go:45] found service account: "default"
	I0528 21:54:54.977794   69886 default_sa.go:55] duration metric: took 2.222664ms for default service account to be created ...
	I0528 21:54:54.977803   69886 system_pods.go:116] waiting for k8s-apps to be running ...
	I0528 21:54:54.982505   69886 system_pods.go:86] 8 kube-system pods found
	I0528 21:54:54.982527   69886 system_pods.go:89] "coredns-7db6d8ff4d-fmk2h" [a084dfb5-5818-4244-9052-a9f861b45617] Running
	I0528 21:54:54.982532   69886 system_pods.go:89] "etcd-no-preload-290122" [85b87ff0-50e4-4b23-b4dd-8442068b1af3] Running
	I0528 21:54:54.982537   69886 system_pods.go:89] "kube-apiserver-no-preload-290122" [2e2cecbc-1755-4cdc-8963-ab1e5b73e9f1] Running
	I0528 21:54:54.982541   69886 system_pods.go:89] "kube-controller-manager-no-preload-290122" [9a6218b4-fbd3-46ff-8eb5-a19f5ed9db72] Running
	I0528 21:54:54.982545   69886 system_pods.go:89] "kube-proxy-w45qh" [f962c73d-872d-4f78-a628-267cb0be49bb] Running
	I0528 21:54:54.982549   69886 system_pods.go:89] "kube-scheduler-no-preload-290122" [d191845d-d1ff-45d8-8601-b0395e2752f1] Running
	I0528 21:54:54.982554   69886 system_pods.go:89] "metrics-server-569cc877fc-j2khc" [2254e89c-3a61-4523-99a2-27ec92e73c9a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0528 21:54:54.982559   69886 system_pods.go:89] "storage-provisioner" [fc1a5463-05e0-4213-a7a8-2dd7f355ac36] Running
	I0528 21:54:54.982565   69886 system_pods.go:126] duration metric: took 4.757682ms to wait for k8s-apps to be running ...
	I0528 21:54:54.982571   69886 system_svc.go:44] waiting for kubelet service to be running ....
	I0528 21:54:54.982611   69886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 21:54:54.998318   69886 system_svc.go:56] duration metric: took 15.73926ms WaitForService to wait for kubelet
	I0528 21:54:54.998344   69886 kubeadm.go:576] duration metric: took 4m23.497907193s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 21:54:54.998364   69886 node_conditions.go:102] verifying NodePressure condition ...
	I0528 21:54:55.000709   69886 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 21:54:55.000726   69886 node_conditions.go:123] node cpu capacity is 2
	I0528 21:54:55.000737   69886 node_conditions.go:105] duration metric: took 2.368195ms to run NodePressure ...
	I0528 21:54:55.000747   69886 start.go:240] waiting for startup goroutines ...
	I0528 21:54:55.000754   69886 start.go:245] waiting for cluster config update ...
	I0528 21:54:55.000767   69886 start.go:254] writing updated cluster config ...
	I0528 21:54:55.001043   69886 ssh_runner.go:195] Run: rm -f paused
	I0528 21:54:55.049907   69886 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0528 21:54:55.051941   69886 out.go:177] * Done! kubectl is now configured to use "no-preload-290122" cluster and "default" namespace by default
	I0528 21:54:50.874003   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:53.946104   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:00.029992   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:03.098014   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:09.177976   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:12.250035   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:18.330105   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:21.402027   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:27.830110   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:55:27.830377   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:55:27.830409   70393 kubeadm.go:309] 
	I0528 21:55:27.830460   70393 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0528 21:55:27.830496   70393 kubeadm.go:309] 		timed out waiting for the condition
	I0528 21:55:27.830504   70393 kubeadm.go:309] 
	I0528 21:55:27.830563   70393 kubeadm.go:309] 	This error is likely caused by:
	I0528 21:55:27.830629   70393 kubeadm.go:309] 		- The kubelet is not running
	I0528 21:55:27.830806   70393 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0528 21:55:27.830833   70393 kubeadm.go:309] 
	I0528 21:55:27.830939   70393 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0528 21:55:27.830970   70393 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0528 21:55:27.830999   70393 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0528 21:55:27.831006   70393 kubeadm.go:309] 
	I0528 21:55:27.831089   70393 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0528 21:55:27.831161   70393 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0528 21:55:27.831168   70393 kubeadm.go:309] 
	I0528 21:55:27.831276   70393 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0528 21:55:27.831396   70393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0528 21:55:27.831491   70393 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0528 21:55:27.831586   70393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0528 21:55:27.831597   70393 kubeadm.go:309] 
	I0528 21:55:27.832385   70393 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0528 21:55:27.832478   70393 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0528 21:55:27.832569   70393 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0528 21:55:27.832707   70393 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0528 21:55:27.832768   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0528 21:55:28.286592   70393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 21:55:28.301095   70393 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0528 21:55:28.310856   70393 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0528 21:55:28.310875   70393 kubeadm.go:156] found existing configuration files:
	
	I0528 21:55:28.310916   70393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0528 21:55:28.319713   70393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0528 21:55:28.319757   70393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0528 21:55:28.328964   70393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0528 21:55:28.337404   70393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0528 21:55:28.337456   70393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0528 21:55:28.346480   70393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0528 21:55:28.355427   70393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0528 21:55:28.355475   70393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0528 21:55:28.364843   70393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0528 21:55:28.373821   70393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0528 21:55:28.373874   70393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0528 21:55:28.382542   70393 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0528 21:55:28.448539   70393 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0528 21:55:28.448744   70393 kubeadm.go:309] [preflight] Running pre-flight checks
	I0528 21:55:28.592911   70393 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0528 21:55:28.593029   70393 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0528 21:55:28.593137   70393 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0528 21:55:28.793805   70393 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0528 21:55:28.795709   70393 out.go:204]   - Generating certificates and keys ...
	I0528 21:55:28.795786   70393 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0528 21:55:28.795854   70393 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0528 21:55:28.795959   70393 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0528 21:55:28.796055   70393 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0528 21:55:28.796153   70393 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0528 21:55:28.796349   70393 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0528 21:55:28.796467   70393 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0528 21:55:28.796537   70393 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0528 21:55:28.796610   70393 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0528 21:55:28.796721   70393 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0528 21:55:28.796768   70393 kubeadm.go:309] [certs] Using the existing "sa" key
	I0528 21:55:28.796847   70393 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0528 21:55:28.946885   70393 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0528 21:55:29.128640   70393 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0528 21:55:29.240490   70393 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0528 21:55:29.542128   70393 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0528 21:55:29.563784   70393 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0528 21:55:29.565927   70393 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0528 21:55:29.566159   70393 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0528 21:55:29.711517   70393 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0528 21:55:27.482003   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:30.554006   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:29.713311   70393 out.go:204]   - Booting up control plane ...
	I0528 21:55:29.713420   70393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0528 21:55:29.717970   70393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0528 21:55:29.718779   70393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0528 21:55:29.719429   70393 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0528 21:55:29.722781   70393 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0528 21:55:36.633958   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:39.710041   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:45.785968   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:48.861975   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:54.938007   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:58.014038   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:04.094039   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:07.162043   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:09.724902   70393 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0528 21:56:09.725334   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:56:09.725557   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:56:13.241997   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:14.726408   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:56:14.726667   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:56:16.314032   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:22.394150   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:25.465982   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:24.727314   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:56:24.727592   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:56:31.546004   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:34.617980   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:40.697993   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:43.770044   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:44.728635   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:56:44.728954   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:56:49.853977   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:52.922083   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:59.001998   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:02.073983   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:08.157974   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:11.226001   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:17.305964   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:20.377963   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:24.729385   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:57:24.729659   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:57:24.729688   70393 kubeadm.go:309] 
	I0528 21:57:24.729745   70393 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0528 21:57:24.729835   70393 kubeadm.go:309] 		timed out waiting for the condition
	I0528 21:57:24.729856   70393 kubeadm.go:309] 
	I0528 21:57:24.729898   70393 kubeadm.go:309] 	This error is likely caused by:
	I0528 21:57:24.729930   70393 kubeadm.go:309] 		- The kubelet is not running
	I0528 21:57:24.730023   70393 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0528 21:57:24.730030   70393 kubeadm.go:309] 
	I0528 21:57:24.730156   70393 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0528 21:57:24.730212   70393 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0528 21:57:24.730267   70393 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0528 21:57:24.730278   70393 kubeadm.go:309] 
	I0528 21:57:24.730403   70393 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0528 21:57:24.730522   70393 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0528 21:57:24.730533   70393 kubeadm.go:309] 
	I0528 21:57:24.730669   70393 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0528 21:57:24.730788   70393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0528 21:57:24.730899   70393 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0528 21:57:24.731020   70393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0528 21:57:24.731039   70393 kubeadm.go:309] 
	I0528 21:57:24.731657   70393 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0528 21:57:24.731752   70393 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0528 21:57:24.731861   70393 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0528 21:57:24.731942   70393 kubeadm.go:393] duration metric: took 7m57.905523124s to StartCluster
	I0528 21:57:24.731997   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:57:24.732064   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:57:24.772889   70393 cri.go:89] found id: ""
	I0528 21:57:24.772916   70393 logs.go:276] 0 containers: []
	W0528 21:57:24.772923   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:57:24.772929   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:57:24.772988   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:57:24.806418   70393 cri.go:89] found id: ""
	I0528 21:57:24.806447   70393 logs.go:276] 0 containers: []
	W0528 21:57:24.806458   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:57:24.806467   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:57:24.806534   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:57:24.844994   70393 cri.go:89] found id: ""
	I0528 21:57:24.845020   70393 logs.go:276] 0 containers: []
	W0528 21:57:24.845028   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:57:24.845035   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:57:24.845098   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:57:24.880517   70393 cri.go:89] found id: ""
	I0528 21:57:24.880547   70393 logs.go:276] 0 containers: []
	W0528 21:57:24.880558   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:57:24.880566   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:57:24.880615   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:57:24.917534   70393 cri.go:89] found id: ""
	I0528 21:57:24.917561   70393 logs.go:276] 0 containers: []
	W0528 21:57:24.917569   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:57:24.917575   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:57:24.917624   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:57:24.952898   70393 cri.go:89] found id: ""
	I0528 21:57:24.952929   70393 logs.go:276] 0 containers: []
	W0528 21:57:24.952940   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:57:24.952948   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:57:24.953011   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:57:24.994957   70393 cri.go:89] found id: ""
	I0528 21:57:24.994983   70393 logs.go:276] 0 containers: []
	W0528 21:57:24.994990   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:57:24.994996   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:57:24.995046   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:57:25.032594   70393 cri.go:89] found id: ""
	I0528 21:57:25.032617   70393 logs.go:276] 0 containers: []
	W0528 21:57:25.032624   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:57:25.032633   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:57:25.032645   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:57:25.112858   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:57:25.112882   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:57:25.112894   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:57:25.217748   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:57:25.217792   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:57:25.289998   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:57:25.290035   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:57:25.344833   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:57:25.344868   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0528 21:57:25.360547   70393 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0528 21:57:25.360594   70393 out.go:239] * 
	W0528 21:57:25.360659   70393 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0528 21:57:25.360693   70393 out.go:239] * 
	W0528 21:57:25.361545   70393 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0528 21:57:25.365387   70393 out.go:177] 
	W0528 21:57:25.366681   70393 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0528 21:57:25.366731   70393 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0528 21:57:25.366772   70393 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0528 21:57:25.369011   70393 out.go:177] 
	
	
	==> CRI-O <==
	May 28 21:57:26 old-k8s-version-499466 crio[643]: time="2024-05-28 21:57:26.299630292Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716933446299601886,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7a8e63ae-6149-45f8-9e26-c10041a39540 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:57:26 old-k8s-version-499466 crio[643]: time="2024-05-28 21:57:26.300118360Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=67d1b6e0-9608-41de-a738-5a8ada7cb304 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:57:26 old-k8s-version-499466 crio[643]: time="2024-05-28 21:57:26.300166206Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=67d1b6e0-9608-41de-a738-5a8ada7cb304 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:57:26 old-k8s-version-499466 crio[643]: time="2024-05-28 21:57:26.300251965Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=67d1b6e0-9608-41de-a738-5a8ada7cb304 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:57:26 old-k8s-version-499466 crio[643]: time="2024-05-28 21:57:26.330422160Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=41dae308-bd3d-4d45-93a7-861f3a589b35 name=/runtime.v1.RuntimeService/Version
	May 28 21:57:26 old-k8s-version-499466 crio[643]: time="2024-05-28 21:57:26.330493631Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=41dae308-bd3d-4d45-93a7-861f3a589b35 name=/runtime.v1.RuntimeService/Version
	May 28 21:57:26 old-k8s-version-499466 crio[643]: time="2024-05-28 21:57:26.331498131Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=35ed6788-f0e8-404a-aa9f-17409c331538 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:57:26 old-k8s-version-499466 crio[643]: time="2024-05-28 21:57:26.331848212Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716933446331829922,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=35ed6788-f0e8-404a-aa9f-17409c331538 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:57:26 old-k8s-version-499466 crio[643]: time="2024-05-28 21:57:26.332356876Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=49836d91-e6f2-4567-9ca7-30bb332fe2c6 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:57:26 old-k8s-version-499466 crio[643]: time="2024-05-28 21:57:26.332406699Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=49836d91-e6f2-4567-9ca7-30bb332fe2c6 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:57:26 old-k8s-version-499466 crio[643]: time="2024-05-28 21:57:26.332441561Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=49836d91-e6f2-4567-9ca7-30bb332fe2c6 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:57:26 old-k8s-version-499466 crio[643]: time="2024-05-28 21:57:26.363083922Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=213da366-e4d3-44b6-afef-d68c4be43103 name=/runtime.v1.RuntimeService/Version
	May 28 21:57:26 old-k8s-version-499466 crio[643]: time="2024-05-28 21:57:26.363148968Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=213da366-e4d3-44b6-afef-d68c4be43103 name=/runtime.v1.RuntimeService/Version
	May 28 21:57:26 old-k8s-version-499466 crio[643]: time="2024-05-28 21:57:26.364319631Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ce60c706-07f2-4543-bdf5-457672644c2d name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:57:26 old-k8s-version-499466 crio[643]: time="2024-05-28 21:57:26.364682719Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716933446364662790,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ce60c706-07f2-4543-bdf5-457672644c2d name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:57:26 old-k8s-version-499466 crio[643]: time="2024-05-28 21:57:26.365344269Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1e8d2a27-417a-49af-9883-8ada626c1461 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:57:26 old-k8s-version-499466 crio[643]: time="2024-05-28 21:57:26.365417316Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1e8d2a27-417a-49af-9883-8ada626c1461 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:57:26 old-k8s-version-499466 crio[643]: time="2024-05-28 21:57:26.365463079Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1e8d2a27-417a-49af-9883-8ada626c1461 name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:57:26 old-k8s-version-499466 crio[643]: time="2024-05-28 21:57:26.397134114Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ed0b2fc5-237a-49a2-9793-0f5eb51326be name=/runtime.v1.RuntimeService/Version
	May 28 21:57:26 old-k8s-version-499466 crio[643]: time="2024-05-28 21:57:26.397261409Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ed0b2fc5-237a-49a2-9793-0f5eb51326be name=/runtime.v1.RuntimeService/Version
	May 28 21:57:26 old-k8s-version-499466 crio[643]: time="2024-05-28 21:57:26.398503237Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=90b319f3-ec14-4c8c-8c6b-2214830053c3 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:57:26 old-k8s-version-499466 crio[643]: time="2024-05-28 21:57:26.398849681Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716933446398830886,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=90b319f3-ec14-4c8c-8c6b-2214830053c3 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 21:57:26 old-k8s-version-499466 crio[643]: time="2024-05-28 21:57:26.399453805Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9a6e2156-eb0b-4344-a8f5-ebe368dc40bb name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:57:26 old-k8s-version-499466 crio[643]: time="2024-05-28 21:57:26.399515162Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9a6e2156-eb0b-4344-a8f5-ebe368dc40bb name=/runtime.v1.RuntimeService/ListContainers
	May 28 21:57:26 old-k8s-version-499466 crio[643]: time="2024-05-28 21:57:26.399547686Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=9a6e2156-eb0b-4344-a8f5-ebe368dc40bb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[May28 21:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.059723] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041122] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.612680] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.319990] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.591576] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.302597] systemd-fstab-generator[565]: Ignoring "noauto" option for root device
	[  +0.059124] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058807] systemd-fstab-generator[577]: Ignoring "noauto" option for root device
	[  +0.173273] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.170028] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.245355] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +6.602395] systemd-fstab-generator[831]: Ignoring "noauto" option for root device
	[  +0.061119] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.883111] systemd-fstab-generator[957]: Ignoring "noauto" option for root device
	[ +13.815764] kauditd_printk_skb: 46 callbacks suppressed
	[May28 21:53] systemd-fstab-generator[5029]: Ignoring "noauto" option for root device
	[May28 21:55] systemd-fstab-generator[5306]: Ignoring "noauto" option for root device
	[  +0.062272] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 21:57:26 up 8 min,  0 users,  load average: 0.14, 0.22, 0.12
	Linux old-k8s-version-499466 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	May 28 21:57:24 old-k8s-version-499466 kubelet[5489]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc00009e0c0, 0xc0009b6240)
	May 28 21:57:24 old-k8s-version-499466 kubelet[5489]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	May 28 21:57:24 old-k8s-version-499466 kubelet[5489]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	May 28 21:57:24 old-k8s-version-499466 kubelet[5489]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	May 28 21:57:24 old-k8s-version-499466 kubelet[5489]: goroutine 147 [select]:
	May 28 21:57:24 old-k8s-version-499466 kubelet[5489]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0009c1ef0, 0x4f0ac20, 0xc00072e370, 0x1, 0xc00009e0c0)
	May 28 21:57:24 old-k8s-version-499466 kubelet[5489]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	May 28 21:57:24 old-k8s-version-499466 kubelet[5489]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000266700, 0xc00009e0c0)
	May 28 21:57:24 old-k8s-version-499466 kubelet[5489]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	May 28 21:57:24 old-k8s-version-499466 kubelet[5489]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	May 28 21:57:24 old-k8s-version-499466 kubelet[5489]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	May 28 21:57:24 old-k8s-version-499466 kubelet[5489]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000968770, 0xc000b79920)
	May 28 21:57:24 old-k8s-version-499466 kubelet[5489]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	May 28 21:57:24 old-k8s-version-499466 kubelet[5489]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	May 28 21:57:24 old-k8s-version-499466 kubelet[5489]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	May 28 21:57:24 old-k8s-version-499466 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	May 28 21:57:24 old-k8s-version-499466 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	May 28 21:57:25 old-k8s-version-499466 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	May 28 21:57:25 old-k8s-version-499466 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	May 28 21:57:25 old-k8s-version-499466 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	May 28 21:57:25 old-k8s-version-499466 kubelet[5544]: I0528 21:57:25.272054    5544 server.go:416] Version: v1.20.0
	May 28 21:57:25 old-k8s-version-499466 kubelet[5544]: I0528 21:57:25.273446    5544 server.go:837] Client rotation is on, will bootstrap in background
	May 28 21:57:25 old-k8s-version-499466 kubelet[5544]: I0528 21:57:25.278960    5544 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	May 28 21:57:25 old-k8s-version-499466 kubelet[5544]: W0528 21:57:25.282384    5544 manager.go:159] Cannot detect current cgroup on cgroup v2
	May 28 21:57:25 old-k8s-version-499466 kubelet[5544]: I0528 21:57:25.282688    5544 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-499466 -n old-k8s-version-499466
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-499466 -n old-k8s-version-499466: exit status 2 (221.444545ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-499466" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (737.38s)
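The failure above is the K8S_KUBELET_NOT_RUNNING pattern: kubeadm's wait-control-plane phase times out because the kubelet never answers http://localhost:10248/healthz, and the kubelet journal shows the v1.20.0 kubelet exiting with status=255 and being restarted by systemd (restart counter at 20), with a "Cannot detect current cgroup on cgroup v2" warning. A minimal triage sketch, using only the commands this log itself suggests; the profile name old-k8s-version-499466 and the cri-o socket path are taken from the output above and would need adjusting for other profiles:

	# run inside the node, e.g. via 'minikube ssh -p old-k8s-version-499466'
	systemctl status kubelet                 # unit state; the journal above shows status=255 exits
	journalctl -xeu kubelet                  # reason for each kubelet restart
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID   # inspect a failing container, if any

	# from the host, as suggested in the exit message above
	minikube start -p old-k8s-version-499466 --extra-config=kubelet.cgroup-driver=systemd

The --extra-config flag is quoted directly from minikube's own suggestion; whether it resolves this particular v1.20.0-on-cgroup-v2 failure is not established by this run.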

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-249165 --alsologtostderr -v=3
E0528 21:51:36.131469   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/calico-110727/client.crt: no such file or directory
E0528 21:51:55.337169   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/custom-flannel-110727/client.crt: no such file or directory
E0528 21:52:37.451450   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/functional-193928/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-249165 --alsologtostderr -v=3: exit status 82 (2m0.518639946s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-249165"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0528 21:51:09.228675   72537 out.go:291] Setting OutFile to fd 1 ...
	I0528 21:51:09.229663   72537 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:51:09.229679   72537 out.go:304] Setting ErrFile to fd 2...
	I0528 21:51:09.229686   72537 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:51:09.230228   72537 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
	I0528 21:51:09.231261   72537 out.go:298] Setting JSON to false
	I0528 21:51:09.231373   72537 mustload.go:65] Loading cluster: default-k8s-diff-port-249165
	I0528 21:51:09.231835   72537 config.go:182] Loaded profile config "default-k8s-diff-port-249165": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 21:51:09.231946   72537 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/default-k8s-diff-port-249165/config.json ...
	I0528 21:51:09.232219   72537 mustload.go:65] Loading cluster: default-k8s-diff-port-249165
	I0528 21:51:09.232383   72537 config.go:182] Loaded profile config "default-k8s-diff-port-249165": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 21:51:09.232418   72537 stop.go:39] StopHost: default-k8s-diff-port-249165
	I0528 21:51:09.232964   72537 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:51:09.233029   72537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:51:09.250369   72537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36837
	I0528 21:51:09.250936   72537 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:51:09.251612   72537 main.go:141] libmachine: Using API Version  1
	I0528 21:51:09.251643   72537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:51:09.252031   72537 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:51:09.254341   72537 out.go:177] * Stopping node "default-k8s-diff-port-249165"  ...
	I0528 21:51:09.255679   72537 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0528 21:51:09.255705   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 21:51:09.255939   72537 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0528 21:51:09.255962   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 21:51:09.259006   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:51:09.259408   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:51:09.259442   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:51:09.259657   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHPort
	I0528 21:51:09.259832   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:51:09.259963   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHUsername
	I0528 21:51:09.260151   72537 sshutil.go:53] new ssh client: &{IP:192.168.72.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/default-k8s-diff-port-249165/id_rsa Username:docker}
	I0528 21:51:09.393048   72537 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0528 21:51:09.437386   72537 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0528 21:51:09.499079   72537 main.go:141] libmachine: Stopping "default-k8s-diff-port-249165"...
	I0528 21:51:09.499106   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetState
	I0528 21:51:09.500871   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .Stop
	I0528 21:51:09.504839   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 0/120
	I0528 21:51:10.506250   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 1/120
	I0528 21:51:11.507585   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 2/120
	I0528 21:51:12.508910   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 3/120
	I0528 21:51:13.510285   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 4/120
	I0528 21:51:14.512308   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 5/120
	I0528 21:51:15.513666   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 6/120
	I0528 21:51:16.515151   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 7/120
	I0528 21:51:17.516444   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 8/120
	I0528 21:51:18.518143   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 9/120
	I0528 21:51:19.520129   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 10/120
	I0528 21:51:20.521485   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 11/120
	I0528 21:51:21.523103   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 12/120
	I0528 21:51:22.524587   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 13/120
	I0528 21:51:23.525935   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 14/120
	I0528 21:51:24.527711   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 15/120
	I0528 21:51:25.529012   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 16/120
	I0528 21:51:26.530541   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 17/120
	I0528 21:51:27.532273   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 18/120
	I0528 21:51:28.534320   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 19/120
	I0528 21:51:29.536207   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 20/120
	I0528 21:51:30.537343   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 21/120
	I0528 21:51:31.538537   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 22/120
	I0528 21:51:32.539962   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 23/120
	I0528 21:51:33.541200   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 24/120
	I0528 21:51:34.542719   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 25/120
	I0528 21:51:35.544811   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 26/120
	I0528 21:51:36.546181   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 27/120
	I0528 21:51:37.548220   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 28/120
	I0528 21:51:38.549565   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 29/120
	I0528 21:51:39.551494   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 30/120
	I0528 21:51:40.553224   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 31/120
	I0528 21:51:41.554653   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 32/120
	I0528 21:51:42.556859   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 33/120
	I0528 21:51:43.559010   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 34/120
	I0528 21:51:44.560364   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 35/120
	I0528 21:51:45.561700   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 36/120
	I0528 21:51:46.563005   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 37/120
	I0528 21:51:47.564301   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 38/120
	I0528 21:51:48.565659   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 39/120
	I0528 21:51:49.567593   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 40/120
	I0528 21:51:50.568803   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 41/120
	I0528 21:51:51.570291   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 42/120
	I0528 21:51:52.572263   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 43/120
	I0528 21:51:53.573560   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 44/120
	I0528 21:51:54.575604   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 45/120
	I0528 21:51:55.577729   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 46/120
	I0528 21:51:56.579589   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 47/120
	I0528 21:51:57.581299   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 48/120
	I0528 21:51:58.582718   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 49/120
	I0528 21:51:59.584788   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 50/120
	I0528 21:52:00.586183   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 51/120
	I0528 21:52:01.588245   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 52/120
	I0528 21:52:02.589569   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 53/120
	I0528 21:52:03.590802   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 54/120
	I0528 21:52:04.592436   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 55/120
	I0528 21:52:05.593835   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 56/120
	I0528 21:52:06.595089   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 57/120
	I0528 21:52:07.597313   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 58/120
	I0528 21:52:08.598669   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 59/120
	I0528 21:52:09.600908   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 60/120
	I0528 21:52:10.602492   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 61/120
	I0528 21:52:11.604234   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 62/120
	I0528 21:52:12.605547   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 63/120
	I0528 21:52:13.607402   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 64/120
	I0528 21:52:14.609230   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 65/120
	I0528 21:52:15.610557   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 66/120
	I0528 21:52:16.612428   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 67/120
	I0528 21:52:17.613828   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 68/120
	I0528 21:52:18.615090   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 69/120
	I0528 21:52:19.617238   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 70/120
	I0528 21:52:20.618732   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 71/120
	I0528 21:52:21.620072   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 72/120
	I0528 21:52:22.621556   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 73/120
	I0528 21:52:23.622820   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 74/120
	I0528 21:52:24.624618   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 75/120
	I0528 21:52:25.626075   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 76/120
	I0528 21:52:26.627495   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 77/120
	I0528 21:52:27.629393   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 78/120
	I0528 21:52:28.630843   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 79/120
	I0528 21:52:29.632649   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 80/120
	I0528 21:52:30.633965   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 81/120
	I0528 21:52:31.635511   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 82/120
	I0528 21:52:32.636866   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 83/120
	I0528 21:52:33.638146   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 84/120
	I0528 21:52:34.639914   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 85/120
	I0528 21:52:35.641240   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 86/120
	I0528 21:52:36.642624   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 87/120
	I0528 21:52:37.644355   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 88/120
	I0528 21:52:38.645801   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 89/120
	I0528 21:52:39.647699   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 90/120
	I0528 21:52:40.649095   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 91/120
	I0528 21:52:41.650384   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 92/120
	I0528 21:52:42.651761   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 93/120
	I0528 21:52:43.653126   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 94/120
	I0528 21:52:44.654661   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 95/120
	I0528 21:52:45.656147   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 96/120
	I0528 21:52:46.657823   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 97/120
	I0528 21:52:47.659200   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 98/120
	I0528 21:52:48.660563   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 99/120
	I0528 21:52:49.662326   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 100/120
	I0528 21:52:50.664331   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 101/120
	I0528 21:52:51.665589   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 102/120
	I0528 21:52:52.666866   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 103/120
	I0528 21:52:53.668989   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 104/120
	I0528 21:52:54.670828   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 105/120
	I0528 21:52:55.672116   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 106/120
	I0528 21:52:56.673458   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 107/120
	I0528 21:52:57.674832   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 108/120
	I0528 21:52:58.676095   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 109/120
	I0528 21:52:59.678460   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 110/120
	I0528 21:53:00.680280   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 111/120
	I0528 21:53:01.681622   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 112/120
	I0528 21:53:02.683469   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 113/120
	I0528 21:53:03.684894   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 114/120
	I0528 21:53:04.686765   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 115/120
	I0528 21:53:05.688268   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 116/120
	I0528 21:53:06.689785   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 117/120
	I0528 21:53:07.690936   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 118/120
	I0528 21:53:08.693252   72537 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for machine to stop 119/120
	I0528 21:53:09.693823   72537 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0528 21:53:09.693902   72537 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0528 21:53:09.695592   72537 out.go:177] 
	W0528 21:53:09.696836   72537 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0528 21:53:09.696856   72537 out.go:239] * 
	W0528 21:53:09.699337   72537 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0528 21:53:09.700530   72537 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-249165 --alsologtostderr -v=3" : exit status 82
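
The stop above fails because minikube polls the kvm2 machine state roughly once a second for 120 attempts (about two minutes), the libvirt domain never leaves "Running", and the command gives up with GUEST_STOP_TIMEOUT (exit status 82). A minimal manual triage sketch follows; the virsh calls are an assumption (they require the libvirt client tools on the agent and rely on the kvm2 driver naming the domain after the profile), not something the test itself runs.

	# re-run the failing stop exactly as the test invokes it
	out/minikube-linux-amd64 stop -p default-k8s-diff-port-249165 --alsologtostderr -v=3
	# assumption: inspect the libvirt domain directly to see why it never leaves "Running"
	virsh --connect qemu:///system list --all
	virsh --connect qemu:///system dominfo default-k8s-diff-port-249165
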
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-249165 -n default-k8s-diff-port-249165
E0528 21:53:20.453156   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/flannel-110727/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-249165 -n default-k8s-diff-port-249165: exit status 3 (18.615425398s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0528 21:53:28.318085   72981 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.48:22: connect: no route to host
	E0528 21:53:28.318107   72981 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.48:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-249165" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.14s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-249165 -n default-k8s-diff-port-249165
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-249165 -n default-k8s-diff-port-249165: exit status 3 (3.164639375s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0528 21:53:31.482024   73076 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.48:22: connect: no route to host
	E0528 21:53:31.482044   73076 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.48:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-249165 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-249165 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.15157878s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.48:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-249165 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
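
The addon enable fails for the same underlying reason as the stop: the guest at 192.168.72.48 is unreachable over SSH, so the pre-flight "check paused" step (which needs an SSH session to run crictl) errors out with "no route to host". A hedged pre-check before retrying, assuming plain nc/netcat is available on the agent, could look like this:

	# assumption: confirm the guest's SSH port is reachable at all (nc/netcat installed on the agent)
	nc -vz -w 5 192.168.72.48 22
	# the same status probe the test uses; "Error" here means SSH is still unreachable
	out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-249165 -n default-k8s-diff-port-249165
	# only once the host reports a real state does this have a chance of succeeding
	out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-249165 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
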
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-249165 -n default-k8s-diff-port-249165
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-249165 -n default-k8s-diff-port-249165: exit status 3 (3.067441005s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0528 21:53:40.702087   73158 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.48:22: connect: no route to host
	E0528 21:53:40.702106   73158 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.48:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-249165" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0528 21:54:04.182292   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/bridge-110727/client.crt: no such file or directory
E0528 21:54:25.051864   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/enable-default-cni-110727/client.crt: no such file or directory
E0528 21:54:32.641676   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kindnet-110727/client.crt: no such file or directory
E0528 21:54:42.597648   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.crt: no such file or directory
E0528 21:54:45.763458   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/auto-110727/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-595279 -n embed-certs-595279
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-05-28 22:02:42.415101164 +0000 UTC m=+6093.171179779
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
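
The wait above polls for pods matching the selector k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace and times out after 9 minutes without ever seeing one. A quick way to triage by hand is to query that selector directly; this is a sketch assuming the kubeconfig context carries the profile name (the later "Done! kubectl is now configured to use \"embed-certs-595279\"" line suggests it does):

	# list whatever currently matches the selector the test waits on
	kubectl --context embed-certs-595279 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide
	# surface scheduling / image-pull events if a pod exists but never becomes Ready
	kubectl --context embed-certs-595279 -n kubernetes-dashboard describe pods -l k8s-app=kubernetes-dashboard
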
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-595279 -n embed-certs-595279
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-595279 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-595279 logs -n 25: (1.259070402s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-110727                           | enable-default-cni-110727    | jenkins | v1.33.1 | 28 May 24 21:39 UTC | 28 May 24 21:39 UTC |
	|         | sudo systemctl status crio                             |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-110727                           | enable-default-cni-110727    | jenkins | v1.33.1 | 28 May 24 21:39 UTC | 28 May 24 21:39 UTC |
	|         | sudo systemctl cat crio                                |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-110727                           | enable-default-cni-110727    | jenkins | v1.33.1 | 28 May 24 21:39 UTC | 28 May 24 21:39 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |         |                     |                     |
	|         | \;                                                     |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-110727                           | enable-default-cni-110727    | jenkins | v1.33.1 | 28 May 24 21:39 UTC | 28 May 24 21:39 UTC |
	|         | sudo crio config                                       |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-110727                           | enable-default-cni-110727    | jenkins | v1.33.1 | 28 May 24 21:39 UTC | 28 May 24 21:39 UTC |
	| start   | -p embed-certs-595279                                  | embed-certs-595279           | jenkins | v1.33.1 | 28 May 24 21:39 UTC | 28 May 24 21:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-290122             | no-preload-290122            | jenkins | v1.33.1 | 28 May 24 21:41 UTC | 28 May 24 21:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-290122                                   | no-preload-290122            | jenkins | v1.33.1 | 28 May 24 21:41 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-595279            | embed-certs-595279           | jenkins | v1.33.1 | 28 May 24 21:41 UTC | 28 May 24 21:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-595279                                  | embed-certs-595279           | jenkins | v1.33.1 | 28 May 24 21:41 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-499466        | old-k8s-version-499466       | jenkins | v1.33.1 | 28 May 24 21:43 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-290122                  | no-preload-290122            | jenkins | v1.33.1 | 28 May 24 21:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-595279                 | embed-certs-595279           | jenkins | v1.33.1 | 28 May 24 21:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-290122                                   | no-preload-290122            | jenkins | v1.33.1 | 28 May 24 21:44 UTC | 28 May 24 21:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-595279                                  | embed-certs-595279           | jenkins | v1.33.1 | 28 May 24 21:44 UTC | 28 May 24 21:53 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-499466                              | old-k8s-version-499466       | jenkins | v1.33.1 | 28 May 24 21:45 UTC | 28 May 24 21:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-499466             | old-k8s-version-499466       | jenkins | v1.33.1 | 28 May 24 21:45 UTC | 28 May 24 21:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-499466                              | old-k8s-version-499466       | jenkins | v1.33.1 | 28 May 24 21:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-257793                              | cert-expiration-257793       | jenkins | v1.33.1 | 28 May 24 21:46 UTC | 28 May 24 21:46 UTC |
	| delete  | -p                                                     | disable-driver-mounts-807140 | jenkins | v1.33.1 | 28 May 24 21:46 UTC | 28 May 24 21:46 UTC |
	|         | disable-driver-mounts-807140                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-249165 | jenkins | v1.33.1 | 28 May 24 21:46 UTC | 28 May 24 21:50 UTC |
	|         | default-k8s-diff-port-249165                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-249165  | default-k8s-diff-port-249165 | jenkins | v1.33.1 | 28 May 24 21:51 UTC | 28 May 24 21:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-249165 | jenkins | v1.33.1 | 28 May 24 21:51 UTC |                     |
	|         | default-k8s-diff-port-249165                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-249165       | default-k8s-diff-port-249165 | jenkins | v1.33.1 | 28 May 24 21:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-249165 | jenkins | v1.33.1 | 28 May 24 21:53 UTC |                     |
	|         | default-k8s-diff-port-249165                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/28 21:53:40
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0528 21:53:40.744358   73188 out.go:291] Setting OutFile to fd 1 ...
	I0528 21:53:40.744653   73188 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:53:40.744664   73188 out.go:304] Setting ErrFile to fd 2...
	I0528 21:53:40.744668   73188 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:53:40.744923   73188 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
	I0528 21:53:40.745490   73188 out.go:298] Setting JSON to false
	I0528 21:53:40.746663   73188 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5764,"bootTime":1716927457,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0528 21:53:40.746723   73188 start.go:139] virtualization: kvm guest
	I0528 21:53:40.749013   73188 out.go:177] * [default-k8s-diff-port-249165] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0528 21:53:40.750611   73188 notify.go:220] Checking for updates...
	I0528 21:53:40.750618   73188 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 21:53:40.752116   73188 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 21:53:40.753384   73188 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 21:53:40.754612   73188 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 21:53:40.755846   73188 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0528 21:53:40.756972   73188 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 21:53:40.758627   73188 config.go:182] Loaded profile config "default-k8s-diff-port-249165": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 21:53:40.759050   73188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:53:40.759106   73188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:53:40.774337   73188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37895
	I0528 21:53:40.774754   73188 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:53:40.775318   73188 main.go:141] libmachine: Using API Version  1
	I0528 21:53:40.775344   73188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:53:40.775633   73188 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:53:40.775791   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 21:53:40.776007   73188 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 21:53:40.776327   73188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:53:40.776382   73188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:53:40.790531   73188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41603
	I0528 21:53:40.790970   73188 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:53:40.791471   73188 main.go:141] libmachine: Using API Version  1
	I0528 21:53:40.791498   73188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:53:40.791802   73188 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:53:40.791983   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 21:53:40.826633   73188 out.go:177] * Using the kvm2 driver based on existing profile
	I0528 21:53:40.827847   73188 start.go:297] selected driver: kvm2
	I0528 21:53:40.827863   73188 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-249165 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-249165 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.48 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:53:40.827981   73188 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0528 21:53:40.828705   73188 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 21:53:40.828777   73188 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18966-3963/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0528 21:53:40.844223   73188 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0528 21:53:40.844574   73188 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 21:53:40.844638   73188 cni.go:84] Creating CNI manager for ""
	I0528 21:53:40.844650   73188 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 21:53:40.844682   73188 start.go:340] cluster config:
	{Name:default-k8s-diff-port-249165 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-249165 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.48 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:53:40.844775   73188 iso.go:125] acquiring lock: {Name:mk0ef48bd19f4019c3ee87391d15b9afc1d5e8d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 21:53:40.846544   73188 out.go:177] * Starting "default-k8s-diff-port-249165" primary control-plane node in "default-k8s-diff-port-249165" cluster
	I0528 21:53:40.847754   73188 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 21:53:40.847792   73188 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0528 21:53:40.847801   73188 cache.go:56] Caching tarball of preloaded images
	I0528 21:53:40.847870   73188 preload.go:173] Found /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0528 21:53:40.847880   73188 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0528 21:53:40.847964   73188 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/default-k8s-diff-port-249165/config.json ...
	I0528 21:53:40.848196   73188 start.go:360] acquireMachinesLock for default-k8s-diff-port-249165: {Name:mk6fb3103b370c7f3d9923fd9de7afe2c77239d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0528 21:53:40.848256   73188 start.go:364] duration metric: took 38.994µs to acquireMachinesLock for "default-k8s-diff-port-249165"
	I0528 21:53:40.848271   73188 start.go:96] Skipping create...Using existing machine configuration
	I0528 21:53:40.848281   73188 fix.go:54] fixHost starting: 
	I0528 21:53:40.848534   73188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:53:40.848571   73188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:53:40.863227   73188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32843
	I0528 21:53:40.863708   73188 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:53:40.864162   73188 main.go:141] libmachine: Using API Version  1
	I0528 21:53:40.864182   73188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:53:40.864616   73188 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:53:40.864794   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 21:53:40.864952   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetState
	I0528 21:53:40.866583   73188 fix.go:112] recreateIfNeeded on default-k8s-diff-port-249165: state=Running err=<nil>
	W0528 21:53:40.866600   73188 fix.go:138] unexpected machine state, will restart: <nil>
	I0528 21:53:40.868382   73188 out.go:177] * Updating the running kvm2 "default-k8s-diff-port-249165" VM ...
	I0528 21:53:38.450836   70002 logs.go:123] Gathering logs for storage-provisioner [9c5ee70d85c3e595c91ab0c4bcfdbd1f1161b2643af86fce85c056fc5d38482d] ...
	I0528 21:53:38.450866   70002 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c5ee70d85c3e595c91ab0c4bcfdbd1f1161b2643af86fce85c056fc5d38482d"
	I0528 21:53:38.485575   70002 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:53:38.485610   70002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:53:38.854290   70002 logs.go:123] Gathering logs for container status ...
	I0528 21:53:38.854325   70002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:53:38.902357   70002 logs.go:123] Gathering logs for dmesg ...
	I0528 21:53:38.902389   70002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:53:38.916785   70002 logs.go:123] Gathering logs for etcd [3047accd150d979b115d9296d1b09bdad9f23c29d7c066800914fe3e6e001d3c] ...
	I0528 21:53:38.916820   70002 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3047accd150d979b115d9296d1b09bdad9f23c29d7c066800914fe3e6e001d3c"
	I0528 21:53:38.982119   70002 logs.go:123] Gathering logs for kube-apiserver [056fb79dac85895cd23ca69b8a238157e22babd5bf16ae416bb52d6e9b470622] ...
	I0528 21:53:38.982148   70002 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 056fb79dac85895cd23ca69b8a238157e22babd5bf16ae416bb52d6e9b470622"
	I0528 21:53:39.031038   70002 logs.go:123] Gathering logs for kube-proxy [cfb41c075cb48a9d458fcc2a85aca29b37f56931246adac5b1230661c528edcc] ...
	I0528 21:53:39.031066   70002 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cfb41c075cb48a9d458fcc2a85aca29b37f56931246adac5b1230661c528edcc"
	I0528 21:53:39.068094   70002 logs.go:123] Gathering logs for kube-controller-manager [b5366e4c2bcdabc25f55a71903840f5a97745fb1bac231cf9c73daaf2b9dab89] ...
	I0528 21:53:39.068123   70002 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5366e4c2bcdabc25f55a71903840f5a97745fb1bac231cf9c73daaf2b9dab89"
	I0528 21:53:39.129214   70002 logs.go:123] Gathering logs for kubelet ...
	I0528 21:53:39.129248   70002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:53:39.191483   70002 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:53:39.191523   70002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0528 21:53:41.813698   70002 system_pods.go:59] 8 kube-system pods found
	I0528 21:53:41.813725   70002 system_pods.go:61] "coredns-7db6d8ff4d-8cb7b" [b3908d89-cfc6-4f1a-9aef-861aac0d3e29] Running
	I0528 21:53:41.813730   70002 system_pods.go:61] "etcd-embed-certs-595279" [58581274-a239-4367-926e-c333f201d4f8] Running
	I0528 21:53:41.813733   70002 system_pods.go:61] "kube-apiserver-embed-certs-595279" [cc2dd164-709f-4e59-81bc-ce9d30bbced9] Running
	I0528 21:53:41.813736   70002 system_pods.go:61] "kube-controller-manager-embed-certs-595279" [e049af67-fff8-466f-96ff-81d148602884] Running
	I0528 21:53:41.813739   70002 system_pods.go:61] "kube-proxy-pnl5w" [9c2c68bc-42c2-425e-ae35-a8c07b5d5221] Running
	I0528 21:53:41.813742   70002 system_pods.go:61] "kube-scheduler-embed-certs-595279" [ab6bff2a-7266-4c5c-96bc-c87ef15dd342] Running
	I0528 21:53:41.813748   70002 system_pods.go:61] "metrics-server-569cc877fc-f6fz2" [b5e432cd-3b95-4f20-b9b3-c498512a7564] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0528 21:53:41.813751   70002 system_pods.go:61] "storage-provisioner" [7bf52279-1fbc-40e5-8376-992c545c55dd] Running
	I0528 21:53:41.813771   70002 system_pods.go:74] duration metric: took 3.894565784s to wait for pod list to return data ...
	I0528 21:53:41.813780   70002 default_sa.go:34] waiting for default service account to be created ...
	I0528 21:53:41.816297   70002 default_sa.go:45] found service account: "default"
	I0528 21:53:41.816319   70002 default_sa.go:55] duration metric: took 2.532587ms for default service account to be created ...
	I0528 21:53:41.816326   70002 system_pods.go:116] waiting for k8s-apps to be running ...
	I0528 21:53:41.821407   70002 system_pods.go:86] 8 kube-system pods found
	I0528 21:53:41.821437   70002 system_pods.go:89] "coredns-7db6d8ff4d-8cb7b" [b3908d89-cfc6-4f1a-9aef-861aac0d3e29] Running
	I0528 21:53:41.821447   70002 system_pods.go:89] "etcd-embed-certs-595279" [58581274-a239-4367-926e-c333f201d4f8] Running
	I0528 21:53:41.821453   70002 system_pods.go:89] "kube-apiserver-embed-certs-595279" [cc2dd164-709f-4e59-81bc-ce9d30bbced9] Running
	I0528 21:53:41.821458   70002 system_pods.go:89] "kube-controller-manager-embed-certs-595279" [e049af67-fff8-466f-96ff-81d148602884] Running
	I0528 21:53:41.821461   70002 system_pods.go:89] "kube-proxy-pnl5w" [9c2c68bc-42c2-425e-ae35-a8c07b5d5221] Running
	I0528 21:53:41.821465   70002 system_pods.go:89] "kube-scheduler-embed-certs-595279" [ab6bff2a-7266-4c5c-96bc-c87ef15dd342] Running
	I0528 21:53:41.821472   70002 system_pods.go:89] "metrics-server-569cc877fc-f6fz2" [b5e432cd-3b95-4f20-b9b3-c498512a7564] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0528 21:53:41.821480   70002 system_pods.go:89] "storage-provisioner" [7bf52279-1fbc-40e5-8376-992c545c55dd] Running
	I0528 21:53:41.821489   70002 system_pods.go:126] duration metric: took 5.157831ms to wait for k8s-apps to be running ...
	I0528 21:53:41.821498   70002 system_svc.go:44] waiting for kubelet service to be running ....
	I0528 21:53:41.821538   70002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 21:53:41.838819   70002 system_svc.go:56] duration metric: took 17.315204ms WaitForService to wait for kubelet
	I0528 21:53:41.838844   70002 kubeadm.go:576] duration metric: took 4m26.419891509s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 21:53:41.838864   70002 node_conditions.go:102] verifying NodePressure condition ...
	I0528 21:53:41.841408   70002 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 21:53:41.841424   70002 node_conditions.go:123] node cpu capacity is 2
	I0528 21:53:41.841433   70002 node_conditions.go:105] duration metric: took 2.56566ms to run NodePressure ...
	I0528 21:53:41.841445   70002 start.go:240] waiting for startup goroutines ...
	I0528 21:53:41.841452   70002 start.go:245] waiting for cluster config update ...
	I0528 21:53:41.841463   70002 start.go:254] writing updated cluster config ...
	I0528 21:53:41.841709   70002 ssh_runner.go:195] Run: rm -f paused
	I0528 21:53:41.886820   70002 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0528 21:53:41.888710   70002 out.go:177] * Done! kubectl is now configured to use "embed-certs-595279" cluster and "default" namespace by default
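	For reference, the block above records minikube polling until the kube-system workloads report Running before declaring the embed-certs cluster ready. The following is a minimal illustrative Go sketch of that kind of readiness poll, not minikube's actual implementation: it assumes kubectl is on PATH and pointed at the cluster, and the helper name kubeSystemPodsReady and the 4-minute deadline are illustrative choices (minikube's real check is more selective about which pods it waits on).

	```go
	// Illustrative sketch only (not minikube source): poll kubectl until every
	// kube-system pod reports phase Running or Succeeded, or a deadline passes.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func kubeSystemPodsReady() (bool, error) {
		// List only the pod phases, one per line.
		out, err := exec.Command("kubectl", "get", "pods", "-n", "kube-system",
			"-o", "jsonpath={range .items[*]}{.status.phase}{\"\\n\"}{end}").Output()
		if err != nil {
			return false, err
		}
		for _, phase := range strings.Fields(string(out)) {
			if phase != "Running" && phase != "Succeeded" {
				return false, nil
			}
		}
		return true, nil
	}

	func main() {
		deadline := time.Now().Add(4 * time.Minute) // comparable to the 4m waits in the log
		for time.Now().Before(deadline) {
			if ok, err := kubeSystemPodsReady(); err == nil && ok {
				fmt.Println("all kube-system pods running")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for kube-system pods")
	}
	```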
	I0528 21:53:40.749506   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:53:43.248909   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:53:40.869524   73188 machine.go:94] provisionDockerMachine start ...
	I0528 21:53:40.869542   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 21:53:40.869730   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 21:53:40.872099   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:53:40.872470   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:53:40.872491   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:53:40.872625   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHPort
	I0528 21:53:40.872772   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:53:40.872952   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:53:40.873092   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHUsername
	I0528 21:53:40.873253   73188 main.go:141] libmachine: Using SSH client type: native
	I0528 21:53:40.873429   73188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.48 22 <nil> <nil>}
	I0528 21:53:40.873438   73188 main.go:141] libmachine: About to run SSH command:
	hostname
	I0528 21:53:43.770029   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:53:45.748750   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:53:48.248904   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:53:46.841982   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:53:50.249442   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:53:52.749680   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:53:52.922023   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:53:55.251148   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:53:57.748960   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:53:55.994071   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:53:59.749114   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:02.248306   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:02.074025   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:05.145996   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:04.248616   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:06.749284   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:09.247806   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:11.249481   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:13.748196   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:12.825536   70393 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0528 21:54:12.825810   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:54:12.826159   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:54:14.266167   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:15.749468   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:18.248675   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:17.826706   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:54:17.826945   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:54:17.338025   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:20.248941   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:22.749284   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:23.417971   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:25.248681   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:27.748556   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:27.827370   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:54:27.827610   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:54:26.490049   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:29.748865   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:32.248746   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:32.569987   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:35.641969   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:34.249483   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:36.748835   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:38.749264   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:41.251039   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:43.248816   69886 pod_ready.go:81] duration metric: took 4m0.006582939s for pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace to be "Ready" ...
	E0528 21:54:43.248839   69886 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0528 21:54:43.248847   69886 pod_ready.go:38] duration metric: took 4m4.041932949s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 21:54:43.248863   69886 api_server.go:52] waiting for apiserver process to appear ...
	I0528 21:54:43.248889   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:54:43.248933   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:54:43.296609   69886 cri.go:89] found id: "42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc"
	I0528 21:54:43.296630   69886 cri.go:89] found id: ""
	I0528 21:54:43.296638   69886 logs.go:276] 1 containers: [42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc]
	I0528 21:54:43.296694   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:43.301171   69886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:54:43.301211   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:54:43.340772   69886 cri.go:89] found id: "48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e"
	I0528 21:54:43.340793   69886 cri.go:89] found id: ""
	I0528 21:54:43.340799   69886 logs.go:276] 1 containers: [48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e]
	I0528 21:54:43.340843   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:43.345422   69886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:54:43.345489   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:54:43.392432   69886 cri.go:89] found id: "ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b"
	I0528 21:54:43.392458   69886 cri.go:89] found id: ""
	I0528 21:54:43.392467   69886 logs.go:276] 1 containers: [ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b]
	I0528 21:54:43.392521   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:43.396870   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:54:43.396943   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:54:43.433491   69886 cri.go:89] found id: "e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af"
	I0528 21:54:43.433516   69886 cri.go:89] found id: ""
	I0528 21:54:43.433525   69886 logs.go:276] 1 containers: [e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af]
	I0528 21:54:43.433584   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:43.438209   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:54:43.438276   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:54:43.479257   69886 cri.go:89] found id: "9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910"
	I0528 21:54:43.479299   69886 cri.go:89] found id: ""
	I0528 21:54:43.479309   69886 logs.go:276] 1 containers: [9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910]
	I0528 21:54:43.479425   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:43.484063   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:54:43.484127   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:54:43.523360   69886 cri.go:89] found id: "e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a"
	I0528 21:54:43.523384   69886 cri.go:89] found id: ""
	I0528 21:54:43.523394   69886 logs.go:276] 1 containers: [e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a]
	I0528 21:54:43.523443   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:43.527859   69886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:54:43.527915   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:54:43.565610   69886 cri.go:89] found id: ""
	I0528 21:54:43.565631   69886 logs.go:276] 0 containers: []
	W0528 21:54:43.565638   69886 logs.go:278] No container was found matching "kindnet"
	I0528 21:54:43.565643   69886 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0528 21:54:43.565687   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0528 21:54:43.603133   69886 cri.go:89] found id: "6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba"
	I0528 21:54:43.603155   69886 cri.go:89] found id: "912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0"
	I0528 21:54:43.603159   69886 cri.go:89] found id: ""
	I0528 21:54:43.603166   69886 logs.go:276] 2 containers: [6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba 912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0]
	I0528 21:54:43.603233   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:43.607421   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:43.611570   69886 logs.go:123] Gathering logs for kube-apiserver [42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc] ...
	I0528 21:54:43.611593   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc"
	I0528 21:54:43.656455   69886 logs.go:123] Gathering logs for etcd [48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e] ...
	I0528 21:54:43.656483   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e"
	I0528 21:54:43.708385   69886 logs.go:123] Gathering logs for kube-controller-manager [e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a] ...
	I0528 21:54:43.708416   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a"
	I0528 21:54:43.766267   69886 logs.go:123] Gathering logs for storage-provisioner [6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba] ...
	I0528 21:54:43.766300   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba"
	I0528 21:54:43.813734   69886 logs.go:123] Gathering logs for kube-proxy [9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910] ...
	I0528 21:54:43.813782   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910"
	I0528 21:54:43.857289   69886 logs.go:123] Gathering logs for storage-provisioner [912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0] ...
	I0528 21:54:43.857317   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0"
	I0528 21:54:43.897976   69886 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:54:43.898001   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:54:41.721973   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:44.798063   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:44.394070   69886 logs.go:123] Gathering logs for kubelet ...
	I0528 21:54:44.394112   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:54:44.450041   69886 logs.go:123] Gathering logs for dmesg ...
	I0528 21:54:44.450078   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:54:44.464067   69886 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:54:44.464092   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0528 21:54:44.588402   69886 logs.go:123] Gathering logs for coredns [ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b] ...
	I0528 21:54:44.588432   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b"
	I0528 21:54:44.631477   69886 logs.go:123] Gathering logs for kube-scheduler [e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af] ...
	I0528 21:54:44.631505   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af"
	I0528 21:54:44.676531   69886 logs.go:123] Gathering logs for container status ...
	I0528 21:54:44.676562   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:54:47.229026   69886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:54:47.247014   69886 api_server.go:72] duration metric: took 4m15.746572678s to wait for apiserver process to appear ...
	I0528 21:54:47.247043   69886 api_server.go:88] waiting for apiserver healthz status ...
	I0528 21:54:47.247085   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:54:47.247153   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:54:47.291560   69886 cri.go:89] found id: "42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc"
	I0528 21:54:47.291592   69886 cri.go:89] found id: ""
	I0528 21:54:47.291602   69886 logs.go:276] 1 containers: [42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc]
	I0528 21:54:47.291667   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:47.296538   69886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:54:47.296597   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:54:47.335786   69886 cri.go:89] found id: "48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e"
	I0528 21:54:47.335809   69886 cri.go:89] found id: ""
	I0528 21:54:47.335817   69886 logs.go:276] 1 containers: [48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e]
	I0528 21:54:47.335861   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:47.340222   69886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:54:47.340295   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:54:47.376487   69886 cri.go:89] found id: "ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b"
	I0528 21:54:47.376518   69886 cri.go:89] found id: ""
	I0528 21:54:47.376528   69886 logs.go:276] 1 containers: [ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b]
	I0528 21:54:47.376587   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:47.380986   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:54:47.381043   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:54:47.419121   69886 cri.go:89] found id: "e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af"
	I0528 21:54:47.419144   69886 cri.go:89] found id: ""
	I0528 21:54:47.419151   69886 logs.go:276] 1 containers: [e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af]
	I0528 21:54:47.419194   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:47.423323   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:54:47.423378   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:54:47.460781   69886 cri.go:89] found id: "9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910"
	I0528 21:54:47.460806   69886 cri.go:89] found id: ""
	I0528 21:54:47.460813   69886 logs.go:276] 1 containers: [9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910]
	I0528 21:54:47.460856   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:47.465054   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:54:47.465107   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:54:47.510054   69886 cri.go:89] found id: "e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a"
	I0528 21:54:47.510077   69886 cri.go:89] found id: ""
	I0528 21:54:47.510085   69886 logs.go:276] 1 containers: [e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a]
	I0528 21:54:47.510136   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:47.514707   69886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:54:47.514764   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:54:47.551564   69886 cri.go:89] found id: ""
	I0528 21:54:47.551587   69886 logs.go:276] 0 containers: []
	W0528 21:54:47.551594   69886 logs.go:278] No container was found matching "kindnet"
	I0528 21:54:47.551600   69886 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0528 21:54:47.551647   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0528 21:54:47.591484   69886 cri.go:89] found id: "6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba"
	I0528 21:54:47.591506   69886 cri.go:89] found id: "912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0"
	I0528 21:54:47.591511   69886 cri.go:89] found id: ""
	I0528 21:54:47.591520   69886 logs.go:276] 2 containers: [6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba 912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0]
	I0528 21:54:47.591581   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:47.596620   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:47.600861   69886 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:54:47.600884   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:54:48.031181   69886 logs.go:123] Gathering logs for kubelet ...
	I0528 21:54:48.031218   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:54:48.085321   69886 logs.go:123] Gathering logs for kube-apiserver [42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc] ...
	I0528 21:54:48.085354   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc"
	I0528 21:54:48.135504   69886 logs.go:123] Gathering logs for coredns [ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b] ...
	I0528 21:54:48.135538   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b"
	I0528 21:54:48.172440   69886 logs.go:123] Gathering logs for kube-proxy [9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910] ...
	I0528 21:54:48.172474   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910"
	I0528 21:54:48.210817   69886 logs.go:123] Gathering logs for storage-provisioner [6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba] ...
	I0528 21:54:48.210849   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba"
	I0528 21:54:48.248170   69886 logs.go:123] Gathering logs for storage-provisioner [912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0] ...
	I0528 21:54:48.248196   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0"
	I0528 21:54:48.290905   69886 logs.go:123] Gathering logs for container status ...
	I0528 21:54:48.290933   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:54:48.344302   69886 logs.go:123] Gathering logs for dmesg ...
	I0528 21:54:48.344333   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:54:48.363912   69886 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:54:48.363940   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0528 21:54:48.490794   69886 logs.go:123] Gathering logs for etcd [48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e] ...
	I0528 21:54:48.490836   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e"
	I0528 21:54:48.538412   69886 logs.go:123] Gathering logs for kube-scheduler [e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af] ...
	I0528 21:54:48.538443   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af"
	I0528 21:54:48.574693   69886 logs.go:123] Gathering logs for kube-controller-manager [e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a] ...
	I0528 21:54:48.574724   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a"
	I0528 21:54:47.828383   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:54:47.828686   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:54:51.128492   69886 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0528 21:54:51.132736   69886 api_server.go:279] https://192.168.50.138:8443/healthz returned 200:
	ok
	I0528 21:54:51.133908   69886 api_server.go:141] control plane version: v1.30.1
	I0528 21:54:51.133927   69886 api_server.go:131] duration metric: took 3.886877047s to wait for apiserver health ...
	I0528 21:54:51.133935   69886 system_pods.go:43] waiting for kube-system pods to appear ...
	I0528 21:54:51.133953   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:54:51.134009   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:54:51.174021   69886 cri.go:89] found id: "42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc"
	I0528 21:54:51.174042   69886 cri.go:89] found id: ""
	I0528 21:54:51.174049   69886 logs.go:276] 1 containers: [42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc]
	I0528 21:54:51.174100   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:51.179416   69886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:54:51.179487   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:54:51.218954   69886 cri.go:89] found id: "48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e"
	I0528 21:54:51.218981   69886 cri.go:89] found id: ""
	I0528 21:54:51.218992   69886 logs.go:276] 1 containers: [48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e]
	I0528 21:54:51.219055   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:51.224849   69886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:54:51.224920   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:54:51.265274   69886 cri.go:89] found id: "ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b"
	I0528 21:54:51.265304   69886 cri.go:89] found id: ""
	I0528 21:54:51.265314   69886 logs.go:276] 1 containers: [ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b]
	I0528 21:54:51.265388   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:51.270027   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:54:51.270104   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:54:51.316234   69886 cri.go:89] found id: "e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af"
	I0528 21:54:51.316259   69886 cri.go:89] found id: ""
	I0528 21:54:51.316269   69886 logs.go:276] 1 containers: [e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af]
	I0528 21:54:51.316324   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:51.320705   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:54:51.320771   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:54:51.358054   69886 cri.go:89] found id: "9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910"
	I0528 21:54:51.358079   69886 cri.go:89] found id: ""
	I0528 21:54:51.358089   69886 logs.go:276] 1 containers: [9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910]
	I0528 21:54:51.358136   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:51.363687   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:54:51.363753   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:54:51.409441   69886 cri.go:89] found id: "e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a"
	I0528 21:54:51.409462   69886 cri.go:89] found id: ""
	I0528 21:54:51.409470   69886 logs.go:276] 1 containers: [e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a]
	I0528 21:54:51.409517   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:51.414069   69886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:54:51.414125   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:54:51.454212   69886 cri.go:89] found id: ""
	I0528 21:54:51.454245   69886 logs.go:276] 0 containers: []
	W0528 21:54:51.454255   69886 logs.go:278] No container was found matching "kindnet"
	I0528 21:54:51.454263   69886 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0528 21:54:51.454324   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0528 21:54:51.492146   69886 cri.go:89] found id: "6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba"
	I0528 21:54:51.492174   69886 cri.go:89] found id: "912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0"
	I0528 21:54:51.492181   69886 cri.go:89] found id: ""
	I0528 21:54:51.492190   69886 logs.go:276] 2 containers: [6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba 912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0]
	I0528 21:54:51.492262   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:51.497116   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:51.501448   69886 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:54:51.501469   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:54:51.871114   69886 logs.go:123] Gathering logs for container status ...
	I0528 21:54:51.871151   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:54:51.918562   69886 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:54:51.918590   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0528 21:54:52.031780   69886 logs.go:123] Gathering logs for kube-apiserver [42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc] ...
	I0528 21:54:52.031819   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc"
	I0528 21:54:52.090798   69886 logs.go:123] Gathering logs for kube-proxy [9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910] ...
	I0528 21:54:52.090827   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910"
	I0528 21:54:52.131645   69886 logs.go:123] Gathering logs for kube-controller-manager [e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a] ...
	I0528 21:54:52.131673   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a"
	I0528 21:54:52.191137   69886 logs.go:123] Gathering logs for storage-provisioner [6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba] ...
	I0528 21:54:52.191172   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba"
	I0528 21:54:52.241028   69886 logs.go:123] Gathering logs for storage-provisioner [912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0] ...
	I0528 21:54:52.241054   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0"
	I0528 21:54:52.276075   69886 logs.go:123] Gathering logs for kubelet ...
	I0528 21:54:52.276115   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:54:52.328268   69886 logs.go:123] Gathering logs for dmesg ...
	I0528 21:54:52.328307   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:54:52.342509   69886 logs.go:123] Gathering logs for etcd [48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e] ...
	I0528 21:54:52.342542   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e"
	I0528 21:54:52.390934   69886 logs.go:123] Gathering logs for coredns [ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b] ...
	I0528 21:54:52.390980   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b"
	I0528 21:54:52.429778   69886 logs.go:123] Gathering logs for kube-scheduler [e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af] ...
	I0528 21:54:52.429809   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af"
	I0528 21:54:54.975461   69886 system_pods.go:59] 8 kube-system pods found
	I0528 21:54:54.975495   69886 system_pods.go:61] "coredns-7db6d8ff4d-fmk2h" [a084dfb5-5818-4244-9052-a9f861b45617] Running
	I0528 21:54:54.975502   69886 system_pods.go:61] "etcd-no-preload-290122" [85b87ff0-50e4-4b23-b4dd-8442068b1af3] Running
	I0528 21:54:54.975508   69886 system_pods.go:61] "kube-apiserver-no-preload-290122" [2e2cecbc-1755-4cdc-8963-ab1e5b73e9f1] Running
	I0528 21:54:54.975514   69886 system_pods.go:61] "kube-controller-manager-no-preload-290122" [9a6218b4-fbd3-46ff-8eb5-a19f5ed9db72] Running
	I0528 21:54:54.975519   69886 system_pods.go:61] "kube-proxy-w45qh" [f962c73d-872d-4f78-a628-267cb0be49bb] Running
	I0528 21:54:54.975524   69886 system_pods.go:61] "kube-scheduler-no-preload-290122" [d191845d-d1ff-45d8-8601-b0395e2752f1] Running
	I0528 21:54:54.975532   69886 system_pods.go:61] "metrics-server-569cc877fc-j2khc" [2254e89c-3a61-4523-99a2-27ec92e73c9a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0528 21:54:54.975540   69886 system_pods.go:61] "storage-provisioner" [fc1a5463-05e0-4213-a7a8-2dd7f355ac36] Running
	I0528 21:54:54.975549   69886 system_pods.go:74] duration metric: took 3.841608486s to wait for pod list to return data ...
	I0528 21:54:54.975564   69886 default_sa.go:34] waiting for default service account to be created ...
	I0528 21:54:54.977757   69886 default_sa.go:45] found service account: "default"
	I0528 21:54:54.977794   69886 default_sa.go:55] duration metric: took 2.222664ms for default service account to be created ...
	I0528 21:54:54.977803   69886 system_pods.go:116] waiting for k8s-apps to be running ...
	I0528 21:54:54.982505   69886 system_pods.go:86] 8 kube-system pods found
	I0528 21:54:54.982527   69886 system_pods.go:89] "coredns-7db6d8ff4d-fmk2h" [a084dfb5-5818-4244-9052-a9f861b45617] Running
	I0528 21:54:54.982532   69886 system_pods.go:89] "etcd-no-preload-290122" [85b87ff0-50e4-4b23-b4dd-8442068b1af3] Running
	I0528 21:54:54.982537   69886 system_pods.go:89] "kube-apiserver-no-preload-290122" [2e2cecbc-1755-4cdc-8963-ab1e5b73e9f1] Running
	I0528 21:54:54.982541   69886 system_pods.go:89] "kube-controller-manager-no-preload-290122" [9a6218b4-fbd3-46ff-8eb5-a19f5ed9db72] Running
	I0528 21:54:54.982545   69886 system_pods.go:89] "kube-proxy-w45qh" [f962c73d-872d-4f78-a628-267cb0be49bb] Running
	I0528 21:54:54.982549   69886 system_pods.go:89] "kube-scheduler-no-preload-290122" [d191845d-d1ff-45d8-8601-b0395e2752f1] Running
	I0528 21:54:54.982554   69886 system_pods.go:89] "metrics-server-569cc877fc-j2khc" [2254e89c-3a61-4523-99a2-27ec92e73c9a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0528 21:54:54.982559   69886 system_pods.go:89] "storage-provisioner" [fc1a5463-05e0-4213-a7a8-2dd7f355ac36] Running
	I0528 21:54:54.982565   69886 system_pods.go:126] duration metric: took 4.757682ms to wait for k8s-apps to be running ...
	I0528 21:54:54.982571   69886 system_svc.go:44] waiting for kubelet service to be running ....
	I0528 21:54:54.982611   69886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 21:54:54.998318   69886 system_svc.go:56] duration metric: took 15.73926ms WaitForService to wait for kubelet
	I0528 21:54:54.998344   69886 kubeadm.go:576] duration metric: took 4m23.497907193s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 21:54:54.998364   69886 node_conditions.go:102] verifying NodePressure condition ...
	I0528 21:54:55.000709   69886 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 21:54:55.000726   69886 node_conditions.go:123] node cpu capacity is 2
	I0528 21:54:55.000737   69886 node_conditions.go:105] duration metric: took 2.368195ms to run NodePressure ...
	I0528 21:54:55.000747   69886 start.go:240] waiting for startup goroutines ...
	I0528 21:54:55.000754   69886 start.go:245] waiting for cluster config update ...
	I0528 21:54:55.000767   69886 start.go:254] writing updated cluster config ...
	I0528 21:54:55.001043   69886 ssh_runner.go:195] Run: rm -f paused
	I0528 21:54:55.049907   69886 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0528 21:54:55.051941   69886 out.go:177] * Done! kubectl is now configured to use "no-preload-290122" cluster and "default" namespace by default
	I0528 21:54:50.874003   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:53.946104   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:00.029992   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:03.098014   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:09.177976   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:12.250035   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:18.330105   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:21.402027   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:27.830110   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:55:27.830377   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:55:27.830409   70393 kubeadm.go:309] 
	I0528 21:55:27.830460   70393 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0528 21:55:27.830496   70393 kubeadm.go:309] 		timed out waiting for the condition
	I0528 21:55:27.830504   70393 kubeadm.go:309] 
	I0528 21:55:27.830563   70393 kubeadm.go:309] 	This error is likely caused by:
	I0528 21:55:27.830629   70393 kubeadm.go:309] 		- The kubelet is not running
	I0528 21:55:27.830806   70393 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0528 21:55:27.830833   70393 kubeadm.go:309] 
	I0528 21:55:27.830939   70393 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0528 21:55:27.830970   70393 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0528 21:55:27.830999   70393 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0528 21:55:27.831006   70393 kubeadm.go:309] 
	I0528 21:55:27.831089   70393 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0528 21:55:27.831161   70393 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0528 21:55:27.831168   70393 kubeadm.go:309] 
	I0528 21:55:27.831276   70393 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0528 21:55:27.831396   70393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0528 21:55:27.831491   70393 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0528 21:55:27.831586   70393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0528 21:55:27.831597   70393 kubeadm.go:309] 
	I0528 21:55:27.832385   70393 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0528 21:55:27.832478   70393 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0528 21:55:27.832569   70393 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0528 21:55:27.832707   70393 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
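	The [kubelet-check] lines quoted above describe kubeadm probing the kubelet's healthz endpoint (the equivalent of curl -sSL http://localhost:10248/healthz) and getting "connection refused" until it times out. A minimal Go sketch of that probe pattern is shown below; it is illustrative only (kubeadm's own implementation differs), with the 40s deadline taken from the "[kubelet-check] Initial timeout of 40s passed" message.

	```go
	// Illustrative sketch only: retry an HTTP GET against the kubelet healthz
	// endpoint on localhost:10248 until it answers or a deadline passes.
	// "connection refused" here corresponds to the kubelet not running yet.
	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 5 * time.Second}
		deadline := time.Now().Add(40 * time.Second) // kubeadm's initial kubelet-check timeout
		for {
			resp, err := client.Get("http://localhost:10248/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("kubelet healthz: %d %s\n", resp.StatusCode, string(body))
				return
			}
			if time.Now().After(deadline) {
				fmt.Printf("kubelet never became healthy: %v\n", err)
				return
			}
			time.Sleep(2 * time.Second)
		}
	}
	```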
	
	I0528 21:55:27.832768   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0528 21:55:28.286592   70393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 21:55:28.301095   70393 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0528 21:55:28.310856   70393 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0528 21:55:28.310875   70393 kubeadm.go:156] found existing configuration files:
	
	I0528 21:55:28.310916   70393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0528 21:55:28.319713   70393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0528 21:55:28.319757   70393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0528 21:55:28.328964   70393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0528 21:55:28.337404   70393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0528 21:55:28.337456   70393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0528 21:55:28.346480   70393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0528 21:55:28.355427   70393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0528 21:55:28.355475   70393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0528 21:55:28.364843   70393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0528 21:55:28.373821   70393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0528 21:55:28.373874   70393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0528 21:55:28.382542   70393 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0528 21:55:28.448539   70393 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0528 21:55:28.448744   70393 kubeadm.go:309] [preflight] Running pre-flight checks
	I0528 21:55:28.592911   70393 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0528 21:55:28.593029   70393 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0528 21:55:28.593137   70393 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0528 21:55:28.793805   70393 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0528 21:55:28.795709   70393 out.go:204]   - Generating certificates and keys ...
	I0528 21:55:28.795786   70393 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0528 21:55:28.795854   70393 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0528 21:55:28.795959   70393 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0528 21:55:28.796055   70393 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0528 21:55:28.796153   70393 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0528 21:55:28.796349   70393 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0528 21:55:28.796467   70393 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0528 21:55:28.796537   70393 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0528 21:55:28.796610   70393 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0528 21:55:28.796721   70393 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0528 21:55:28.796768   70393 kubeadm.go:309] [certs] Using the existing "sa" key
	I0528 21:55:28.796847   70393 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0528 21:55:28.946885   70393 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0528 21:55:29.128640   70393 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0528 21:55:29.240490   70393 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0528 21:55:29.542128   70393 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0528 21:55:29.563784   70393 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0528 21:55:29.565927   70393 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0528 21:55:29.566159   70393 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0528 21:55:29.711517   70393 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0528 21:55:27.482003   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:30.554006   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:29.713311   70393 out.go:204]   - Booting up control plane ...
	I0528 21:55:29.713420   70393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0528 21:55:29.717970   70393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0528 21:55:29.718779   70393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0528 21:55:29.719429   70393 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0528 21:55:29.722781   70393 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0528 21:55:36.633958   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:39.710041   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:45.785968   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:48.861975   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:54.938007   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:58.014038   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:04.094039   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:07.162043   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:09.724902   70393 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0528 21:56:09.725334   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:56:09.725557   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:56:13.241997   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:14.726408   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:56:14.726667   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:56:16.314032   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:22.394150   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:25.465982   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:24.727314   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:56:24.727592   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:56:31.546004   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:34.617980   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:40.697993   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:43.770044   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:44.728635   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:56:44.728954   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:56:49.853977   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:52.922083   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:59.001998   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:02.073983   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:08.157974   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:11.226001   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:17.305964   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:20.377963   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:24.729385   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:57:24.729659   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:57:24.729688   70393 kubeadm.go:309] 
	I0528 21:57:24.729745   70393 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0528 21:57:24.729835   70393 kubeadm.go:309] 		timed out waiting for the condition
	I0528 21:57:24.729856   70393 kubeadm.go:309] 
	I0528 21:57:24.729898   70393 kubeadm.go:309] 	This error is likely caused by:
	I0528 21:57:24.729930   70393 kubeadm.go:309] 		- The kubelet is not running
	I0528 21:57:24.730023   70393 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0528 21:57:24.730030   70393 kubeadm.go:309] 
	I0528 21:57:24.730156   70393 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0528 21:57:24.730212   70393 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0528 21:57:24.730267   70393 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0528 21:57:24.730278   70393 kubeadm.go:309] 
	I0528 21:57:24.730403   70393 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0528 21:57:24.730522   70393 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0528 21:57:24.730533   70393 kubeadm.go:309] 
	I0528 21:57:24.730669   70393 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0528 21:57:24.730788   70393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0528 21:57:24.730899   70393 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0528 21:57:24.731020   70393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0528 21:57:24.731039   70393 kubeadm.go:309] 
	I0528 21:57:24.731657   70393 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0528 21:57:24.731752   70393 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0528 21:57:24.731861   70393 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0528 21:57:24.731942   70393 kubeadm.go:393] duration metric: took 7m57.905523124s to StartCluster
	I0528 21:57:24.731997   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:57:24.732064   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:57:24.772889   70393 cri.go:89] found id: ""
	I0528 21:57:24.772916   70393 logs.go:276] 0 containers: []
	W0528 21:57:24.772923   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:57:24.772929   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:57:24.772988   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:57:24.806418   70393 cri.go:89] found id: ""
	I0528 21:57:24.806447   70393 logs.go:276] 0 containers: []
	W0528 21:57:24.806458   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:57:24.806467   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:57:24.806534   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:57:24.844994   70393 cri.go:89] found id: ""
	I0528 21:57:24.845020   70393 logs.go:276] 0 containers: []
	W0528 21:57:24.845028   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:57:24.845035   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:57:24.845098   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:57:24.880517   70393 cri.go:89] found id: ""
	I0528 21:57:24.880547   70393 logs.go:276] 0 containers: []
	W0528 21:57:24.880558   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:57:24.880566   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:57:24.880615   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:57:24.917534   70393 cri.go:89] found id: ""
	I0528 21:57:24.917561   70393 logs.go:276] 0 containers: []
	W0528 21:57:24.917569   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:57:24.917575   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:57:24.917624   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:57:24.952898   70393 cri.go:89] found id: ""
	I0528 21:57:24.952929   70393 logs.go:276] 0 containers: []
	W0528 21:57:24.952940   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:57:24.952948   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:57:24.953011   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:57:24.994957   70393 cri.go:89] found id: ""
	I0528 21:57:24.994983   70393 logs.go:276] 0 containers: []
	W0528 21:57:24.994990   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:57:24.994996   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:57:24.995046   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:57:25.032594   70393 cri.go:89] found id: ""
	I0528 21:57:25.032617   70393 logs.go:276] 0 containers: []
	W0528 21:57:25.032624   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:57:25.032633   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:57:25.032645   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:57:25.112858   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:57:25.112882   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:57:25.112894   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:57:25.217748   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:57:25.217792   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:57:25.289998   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:57:25.290035   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:57:25.344833   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:57:25.344868   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0528 21:57:25.360547   70393 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0528 21:57:25.360594   70393 out.go:239] * 
	W0528 21:57:25.360659   70393 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0528 21:57:25.360693   70393 out.go:239] * 
	W0528 21:57:25.361545   70393 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0528 21:57:25.365387   70393 out.go:177] 
	W0528 21:57:25.366681   70393 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0528 21:57:25.366731   70393 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0528 21:57:25.366772   70393 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0528 21:57:25.369011   70393 out.go:177] 
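The failure above is the kubelet never answering its health probe on 127.0.0.1:10248, so kubeadm times out in the wait-control-plane phase and minikube exits with K8S_KUBELET_NOT_RUNNING. The commands below are only a minimal manual version of the checks the output itself suggests; `<profile>` is a placeholder for the affected profile name, and the cgroup-driver flag is just the suggestion quoted at the end of the log, not a confirmed fix.

	# On the node (minikube ssh -p <profile>), with CRI-O as the runtime shown above:
	systemctl status kubelet
	journalctl -xeu kubelet | tail -n 100
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# From the host, retrying the start with the suggested kubelet cgroup driver:
	minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd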
	I0528 21:57:26.462093   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:29.530040   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:35.610027   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:38.682076   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:44.762057   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:47.838109   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:53.914000   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:56.986078   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:58:03.066042   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:58:06.138002   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:58:12.218031   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:58:15.290043   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:58:18.290952   73188 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 21:58:18.291006   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetMachineName
	I0528 21:58:18.291338   73188 buildroot.go:166] provisioning hostname "default-k8s-diff-port-249165"
	I0528 21:58:18.291363   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetMachineName
	I0528 21:58:18.291646   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 21:58:18.293181   73188 machine.go:97] duration metric: took 4m37.423637232s to provisionDockerMachine
	I0528 21:58:18.293224   73188 fix.go:56] duration metric: took 4m37.444947597s for fixHost
	I0528 21:58:18.293230   73188 start.go:83] releasing machines lock for "default-k8s-diff-port-249165", held for 4m37.444964638s
	W0528 21:58:18.293245   73188 start.go:713] error starting host: provision: host is not running
	W0528 21:58:18.293337   73188 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0528 21:58:18.293346   73188 start.go:728] Will try again in 5 seconds ...
	I0528 21:58:23.295554   73188 start.go:360] acquireMachinesLock for default-k8s-diff-port-249165: {Name:mk6fb3103b370c7f3d9923fd9de7afe2c77239d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0528 21:58:23.295664   73188 start.go:364] duration metric: took 68.737µs to acquireMachinesLock for "default-k8s-diff-port-249165"
	I0528 21:58:23.295686   73188 start.go:96] Skipping create...Using existing machine configuration
	I0528 21:58:23.295692   73188 fix.go:54] fixHost starting: 
	I0528 21:58:23.296036   73188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:58:23.296059   73188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:58:23.310971   73188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33471
	I0528 21:58:23.311354   73188 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:58:23.311769   73188 main.go:141] libmachine: Using API Version  1
	I0528 21:58:23.311791   73188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:58:23.312072   73188 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:58:23.312279   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 21:58:23.312406   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetState
	I0528 21:58:23.313815   73188 fix.go:112] recreateIfNeeded on default-k8s-diff-port-249165: state=Stopped err=<nil>
	I0528 21:58:23.313837   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	W0528 21:58:23.313981   73188 fix.go:138] unexpected machine state, will restart: <nil>
	I0528 21:58:23.315867   73188 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-249165" ...
	I0528 21:58:23.317068   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .Start
	I0528 21:58:23.317224   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Ensuring networks are active...
	I0528 21:58:23.317939   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Ensuring network default is active
	I0528 21:58:23.318317   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Ensuring network mk-default-k8s-diff-port-249165 is active
	I0528 21:58:23.318787   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Getting domain xml...
	I0528 21:58:23.319512   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Creating domain...
	I0528 21:58:24.556897   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting to get IP...
	I0528 21:58:24.557688   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:24.558217   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:24.558288   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:24.558188   74350 retry.go:31] will retry after 274.96624ms: waiting for machine to come up
	I0528 21:58:24.834950   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:24.835591   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:24.835621   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:24.835547   74350 retry.go:31] will retry after 271.693151ms: waiting for machine to come up
	I0528 21:58:25.109193   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:25.109736   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:25.109782   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:25.109675   74350 retry.go:31] will retry after 381.434148ms: waiting for machine to come up
	I0528 21:58:25.493383   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:25.493853   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:25.493880   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:25.493784   74350 retry.go:31] will retry after 384.034489ms: waiting for machine to come up
	I0528 21:58:25.879289   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:25.879822   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:25.879854   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:25.879749   74350 retry.go:31] will retry after 517.483073ms: waiting for machine to come up
	I0528 21:58:26.398450   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:26.399012   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:26.399089   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:26.399010   74350 retry.go:31] will retry after 757.371702ms: waiting for machine to come up
	I0528 21:58:27.157490   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:27.158014   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:27.158044   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:27.157971   74350 retry.go:31] will retry after 1.042611523s: waiting for machine to come up
	I0528 21:58:28.201704   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:28.202196   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:28.202229   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:28.202140   74350 retry.go:31] will retry after 1.287212665s: waiting for machine to come up
	I0528 21:58:29.490908   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:29.491356   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:29.491386   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:29.491287   74350 retry.go:31] will retry after 1.576442022s: waiting for machine to come up
	I0528 21:58:31.069493   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:31.069966   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:31.069995   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:31.069917   74350 retry.go:31] will retry after 2.245383669s: waiting for machine to come up
	I0528 21:58:33.317217   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:33.317670   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:33.317701   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:33.317608   74350 retry.go:31] will retry after 2.415705908s: waiting for machine to come up
	I0528 21:58:35.736148   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:35.736526   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:35.736549   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:35.736486   74350 retry.go:31] will retry after 3.463330934s: waiting for machine to come up
	I0528 21:58:39.201369   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:39.201852   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:39.201885   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:39.201819   74350 retry.go:31] will retry after 4.496481714s: waiting for machine to come up
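The retry lines above show the kvm2 driver polling libvirt's DHCP leases for the mk-default-k8s-diff-port-249165 network with growing, jittered delays until the restarted domain reports an address. A rough way to watch the same leases by hand (an illustration of what the driver is waiting on, not its actual implementation) would be:

	# Assumes libvirt's virsh CLI on the Jenkins host; network and MAC taken from the log above.
	while ! sudo virsh net-dhcp-leases mk-default-k8s-diff-port-249165 | grep -q '52:54:00:f4:fc:a4'; do
	  sleep 2   # the driver itself backs off with increasing delays, as logged by retry.go
	done
	sudo virsh net-dhcp-leases mk-default-k8s-diff-port-249165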
	I0528 21:58:43.699313   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:43.699760   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Found IP for machine: 192.168.72.48
	I0528 21:58:43.699783   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Reserving static IP address...
	I0528 21:58:43.699801   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has current primary IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:43.700262   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Reserved static IP address: 192.168.72.48
	I0528 21:58:43.700280   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for SSH to be available...
	I0528 21:58:43.700295   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-249165", mac: "52:54:00:f4:fc:a4", ip: "192.168.72.48"} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:43.700339   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | skip adding static IP to network mk-default-k8s-diff-port-249165 - found existing host DHCP lease matching {name: "default-k8s-diff-port-249165", mac: "52:54:00:f4:fc:a4", ip: "192.168.72.48"}
	I0528 21:58:43.700362   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | Getting to WaitForSSH function...
	I0528 21:58:43.702496   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:43.702910   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:43.702941   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:43.703104   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | Using SSH client type: external
	I0528 21:58:43.703126   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | Using SSH private key: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/default-k8s-diff-port-249165/id_rsa (-rw-------)
	I0528 21:58:43.703169   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.48 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18966-3963/.minikube/machines/default-k8s-diff-port-249165/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0528 21:58:43.703185   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | About to run SSH command:
	I0528 21:58:43.703211   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | exit 0
	I0528 21:58:43.825921   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | SSH cmd err, output: <nil>: 
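The block above is the driver's SSH reachability probe: it shells out to the system ssh client with the listed options and runs exit 0 until the command succeeds. A roughly equivalent one-off check from the host, using the same key, user, and address shown in the log, would be:

	ssh -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	    -o IdentitiesOnly=yes \
	    -i /home/jenkins/minikube-integration/18966-3963/.minikube/machines/default-k8s-diff-port-249165/id_rsa \
	    docker@192.168.72.48 'exit 0' && echo "ssh reachable"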
	I0528 21:58:43.826314   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetConfigRaw
	I0528 21:58:43.826989   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetIP
	I0528 21:58:43.829337   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:43.829663   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:43.829685   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:43.829993   73188 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/default-k8s-diff-port-249165/config.json ...
	I0528 21:58:43.830227   73188 machine.go:94] provisionDockerMachine start ...
	I0528 21:58:43.830259   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 21:58:43.830499   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 21:58:43.832840   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:43.833193   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:43.833222   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:43.833382   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHPort
	I0528 21:58:43.833551   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:43.833687   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:43.833820   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHUsername
	I0528 21:58:43.833977   73188 main.go:141] libmachine: Using SSH client type: native
	I0528 21:58:43.834147   73188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.48 22 <nil> <nil>}
	I0528 21:58:43.834156   73188 main.go:141] libmachine: About to run SSH command:
	hostname
	I0528 21:58:43.938159   73188 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0528 21:58:43.938191   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetMachineName
	I0528 21:58:43.938426   73188 buildroot.go:166] provisioning hostname "default-k8s-diff-port-249165"
	I0528 21:58:43.938472   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetMachineName
	I0528 21:58:43.938684   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 21:58:43.941594   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:43.941986   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:43.942016   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:43.942195   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHPort
	I0528 21:58:43.942393   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:43.942550   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:43.942742   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHUsername
	I0528 21:58:43.942913   73188 main.go:141] libmachine: Using SSH client type: native
	I0528 21:58:43.943069   73188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.48 22 <nil> <nil>}
	I0528 21:58:43.943082   73188 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-249165 && echo "default-k8s-diff-port-249165" | sudo tee /etc/hostname
	I0528 21:58:44.060923   73188 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-249165
	
	I0528 21:58:44.060955   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 21:58:44.063621   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:44.063974   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:44.064008   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:44.064132   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHPort
	I0528 21:58:44.064326   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:44.064508   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:44.064660   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHUsername
	I0528 21:58:44.064818   73188 main.go:141] libmachine: Using SSH client type: native
	I0528 21:58:44.064999   73188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.48 22 <nil> <nil>}
	I0528 21:58:44.065016   73188 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-249165' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-249165/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-249165' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 21:58:44.174464   73188 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 21:58:44.174491   73188 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18966-3963/.minikube CaCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18966-3963/.minikube}
	I0528 21:58:44.174524   73188 buildroot.go:174] setting up certificates
	I0528 21:58:44.174538   73188 provision.go:84] configureAuth start
	I0528 21:58:44.174549   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetMachineName
	I0528 21:58:44.174838   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetIP
	I0528 21:58:44.177623   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:44.178024   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:44.178052   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:44.178250   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 21:58:44.180956   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:44.181305   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:44.181334   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:44.181500   73188 provision.go:143] copyHostCerts
	I0528 21:58:44.181571   73188 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem, removing ...
	I0528 21:58:44.181582   73188 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem
	I0528 21:58:44.181643   73188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem (1078 bytes)
	I0528 21:58:44.181753   73188 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem, removing ...
	I0528 21:58:44.181787   73188 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem
	I0528 21:58:44.181819   73188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem (1123 bytes)
	I0528 21:58:44.181892   73188 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem, removing ...
	I0528 21:58:44.181899   73188 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem
	I0528 21:58:44.181920   73188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem (1675 bytes)
	I0528 21:58:44.181984   73188 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-249165 san=[127.0.0.1 192.168.72.48 default-k8s-diff-port-249165 localhost minikube]
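
configureAuth generates a server certificate whose SANs cover the VM IP, the profile name, localhost and 127.0.0.1, signed by the minikube CA. A minimal sketch of the SAN handling with crypto/x509, assuming a self-signed certificate instead of CA signing to keep it short (illustrative only, not minikube's implementation):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key for the server certificate (2048-bit RSA keeps the sketch short).
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-249165"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches the profile's CertExpiration
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log line above: IPs and DNS names the server cert must cover.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.48")},
		DNSNames:    []string{"default-k8s-diff-port-249165", "localhost", "minikube"},
	}

	// Self-signed here; the real provisioner signs with ca.pem/ca-key.pem instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
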
	I0528 21:58:44.490074   73188 provision.go:177] copyRemoteCerts
	I0528 21:58:44.490127   73188 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 21:58:44.490150   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 21:58:44.492735   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:44.493121   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:44.493156   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:44.493306   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHPort
	I0528 21:58:44.493526   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:44.493690   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHUsername
	I0528 21:58:44.493845   73188 sshutil.go:53] new ssh client: &{IP:192.168.72.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/default-k8s-diff-port-249165/id_rsa Username:docker}
	I0528 21:58:44.575620   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0528 21:58:44.601185   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0528 21:58:44.625266   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0528 21:58:44.648243   73188 provision.go:87] duration metric: took 473.69068ms to configureAuth
	I0528 21:58:44.648271   73188 buildroot.go:189] setting minikube options for container-runtime
	I0528 21:58:44.648430   73188 config.go:182] Loaded profile config "default-k8s-diff-port-249165": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 21:58:44.648502   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 21:58:44.651430   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:44.651793   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:44.651820   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:44.651960   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHPort
	I0528 21:58:44.652140   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:44.652277   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:44.652436   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHUsername
	I0528 21:58:44.652592   73188 main.go:141] libmachine: Using SSH client type: native
	I0528 21:58:44.652762   73188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.48 22 <nil> <nil>}
	I0528 21:58:44.652777   73188 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0528 21:58:44.923577   73188 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0528 21:58:44.923597   73188 machine.go:97] duration metric: took 1.093358522s to provisionDockerMachine
	I0528 21:58:44.923607   73188 start.go:293] postStartSetup for "default-k8s-diff-port-249165" (driver="kvm2")
	I0528 21:58:44.923618   73188 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0528 21:58:44.923649   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 21:58:44.924030   73188 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0528 21:58:44.924124   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 21:58:44.926704   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:44.927009   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:44.927038   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:44.927162   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHPort
	I0528 21:58:44.927347   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:44.927491   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHUsername
	I0528 21:58:44.927627   73188 sshutil.go:53] new ssh client: &{IP:192.168.72.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/default-k8s-diff-port-249165/id_rsa Username:docker}
	I0528 21:58:45.009429   73188 ssh_runner.go:195] Run: cat /etc/os-release
	I0528 21:58:45.014007   73188 info.go:137] Remote host: Buildroot 2023.02.9
	I0528 21:58:45.014032   73188 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/addons for local assets ...
	I0528 21:58:45.014094   73188 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/files for local assets ...
	I0528 21:58:45.014161   73188 filesync.go:149] local asset: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem -> 117602.pem in /etc/ssl/certs
	I0528 21:58:45.014265   73188 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0528 21:58:45.024039   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem --> /etc/ssl/certs/117602.pem (1708 bytes)
	I0528 21:58:45.050461   73188 start.go:296] duration metric: took 126.842658ms for postStartSetup
	I0528 21:58:45.050497   73188 fix.go:56] duration metric: took 21.754803931s for fixHost
	I0528 21:58:45.050519   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 21:58:45.053312   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:45.053639   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:45.053671   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:45.053821   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHPort
	I0528 21:58:45.054025   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:45.054198   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:45.054339   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHUsername
	I0528 21:58:45.054475   73188 main.go:141] libmachine: Using SSH client type: native
	I0528 21:58:45.054646   73188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.48 22 <nil> <nil>}
	I0528 21:58:45.054657   73188 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0528 21:58:45.159430   73188 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716933525.136417037
	
	I0528 21:58:45.159460   73188 fix.go:216] guest clock: 1716933525.136417037
	I0528 21:58:45.159470   73188 fix.go:229] Guest: 2024-05-28 21:58:45.136417037 +0000 UTC Remote: 2024-05-28 21:58:45.05050169 +0000 UTC m=+304.341994853 (delta=85.915347ms)
	I0528 21:58:45.159495   73188 fix.go:200] guest clock delta is within tolerance: 85.915347ms
	I0528 21:58:45.159502   73188 start.go:83] releasing machines lock for "default-k8s-diff-port-249165", held for 21.863825672s
	I0528 21:58:45.159552   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 21:58:45.159830   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetIP
	I0528 21:58:45.162709   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:45.163053   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:45.163089   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:45.163264   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 21:58:45.163717   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 21:58:45.163931   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 21:58:45.164028   73188 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0528 21:58:45.164072   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 21:58:45.164139   73188 ssh_runner.go:195] Run: cat /version.json
	I0528 21:58:45.164164   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 21:58:45.167063   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:45.167215   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:45.167477   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:45.167505   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:45.167534   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:45.167551   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:45.167605   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHPort
	I0528 21:58:45.167811   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHPort
	I0528 21:58:45.167826   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:45.167992   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHUsername
	I0528 21:58:45.167998   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:45.168132   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHUsername
	I0528 21:58:45.168152   73188 sshutil.go:53] new ssh client: &{IP:192.168.72.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/default-k8s-diff-port-249165/id_rsa Username:docker}
	I0528 21:58:45.168279   73188 sshutil.go:53] new ssh client: &{IP:192.168.72.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/default-k8s-diff-port-249165/id_rsa Username:docker}
	I0528 21:58:45.243473   73188 ssh_runner.go:195] Run: systemctl --version
	I0528 21:58:45.275272   73188 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0528 21:58:45.416616   73188 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0528 21:58:45.423144   73188 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0528 21:58:45.423203   73188 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 21:58:45.438939   73188 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0528 21:58:45.438963   73188 start.go:494] detecting cgroup driver to use...
	I0528 21:58:45.439035   73188 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0528 21:58:45.454944   73188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 21:58:45.469976   73188 docker.go:217] disabling cri-docker service (if available) ...
	I0528 21:58:45.470031   73188 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0528 21:58:45.484152   73188 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0528 21:58:45.497541   73188 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0528 21:58:45.622055   73188 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0528 21:58:45.760388   73188 docker.go:233] disabling docker service ...
	I0528 21:58:45.760472   73188 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0528 21:58:45.779947   73188 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0528 21:58:45.794310   73188 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0528 21:58:45.926921   73188 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0528 21:58:46.042042   73188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0528 21:58:46.055486   73188 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 21:58:46.074285   73188 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0528 21:58:46.074347   73188 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:58:46.084646   73188 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0528 21:58:46.084709   73188 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:58:46.094701   73188 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:58:46.104877   73188 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:58:46.115549   73188 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0528 21:58:46.125973   73188 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:58:46.136293   73188 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:58:46.153570   73188 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
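
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: force the registry.k8s.io/pause:3.9 pause image, switch cgroup_manager to cgroupfs, pin conmon_cgroup to "pod", and open unprivileged ports via default_sysctls. A small sketch with the same intent, applying the substitutions to an in-memory config string with regexp (an assumed stand-in for the remote sed edits, not the code minikube ships):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `pause_image = "registry.k8s.io/pause:3.5"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Same intent as the sed edits in the log: force the pause image and cgroupfs,
	// and pin conmon_cgroup to "pod" (the log deletes and re-adds that line instead).
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$`).
		ReplaceAllString(conf, `conmon_cgroup = "pod"`)
	fmt.Print(conf)
}
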
	I0528 21:58:46.165428   73188 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0528 21:58:46.175167   73188 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0528 21:58:46.175224   73188 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0528 21:58:46.189687   73188 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0528 21:58:46.199630   73188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 21:58:46.322596   73188 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0528 21:58:46.465841   73188 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0528 21:58:46.465905   73188 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0528 21:58:46.471249   73188 start.go:562] Will wait 60s for crictl version
	I0528 21:58:46.471301   73188 ssh_runner.go:195] Run: which crictl
	I0528 21:58:46.474963   73188 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0528 21:58:46.514028   73188 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0528 21:58:46.514111   73188 ssh_runner.go:195] Run: crio --version
	I0528 21:58:46.544060   73188 ssh_runner.go:195] Run: crio --version
	I0528 21:58:46.577448   73188 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0528 21:58:46.578815   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetIP
	I0528 21:58:46.581500   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:46.581876   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:46.581918   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:46.582081   73188 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0528 21:58:46.586277   73188 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 21:58:46.599163   73188 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-249165 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-249165 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.48 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0528 21:58:46.599265   73188 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 21:58:46.599308   73188 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 21:58:46.636824   73188 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0528 21:58:46.636895   73188 ssh_runner.go:195] Run: which lz4
	I0528 21:58:46.640890   73188 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0528 21:58:46.645433   73188 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0528 21:58:46.645457   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0528 21:58:48.069572   73188 crio.go:462] duration metric: took 1.428706508s to copy over tarball
	I0528 21:58:48.069660   73188 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0528 21:58:50.289428   73188 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.2197347s)
	I0528 21:58:50.289459   73188 crio.go:469] duration metric: took 2.219854472s to extract the tarball
	I0528 21:58:50.289466   73188 ssh_runner.go:146] rm: /preloaded.tar.lz4
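
The preload path above is: check whether /preloaded.tar.lz4 already exists on the guest, scp the ~395 MB tarball, unpack it into /var with lz4-aware tar, then remove it. A minimal sketch of the extraction step (assumes the tarball is already on the local machine and lz4 is installed):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Same extraction the log runs remotely: lz4-decompress and unpack into /var,
	// preserving xattrs so image layers keep their file capabilities.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
}
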
	I0528 21:58:50.329649   73188 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 21:58:50.373900   73188 crio.go:514] all images are preloaded for cri-o runtime.
	I0528 21:58:50.373922   73188 cache_images.go:84] Images are preloaded, skipping loading
	I0528 21:58:50.373928   73188 kubeadm.go:928] updating node { 192.168.72.48 8444 v1.30.1 crio true true} ...
	I0528 21:58:50.374059   73188 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-249165 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.48
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-249165 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0528 21:58:50.374142   73188 ssh_runner.go:195] Run: crio config
	I0528 21:58:50.430538   73188 cni.go:84] Creating CNI manager for ""
	I0528 21:58:50.430573   73188 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 21:58:50.430590   73188 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0528 21:58:50.430618   73188 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.48 APIServerPort:8444 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-249165 NodeName:default-k8s-diff-port-249165 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0528 21:58:50.430754   73188 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.48
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-249165"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
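
In the generated kubeadm config, the pod subnet (10.244.0.0/16) must not overlap the service CIDR (10.96.0.0/12), and the API server binds port 8444 rather than the default 8443, which is what the default-k8s-diff-port profile exercises. A tiny illustrative check of the two CIDRs with net/netip (not something minikube runs in this form):

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	pods := netip.MustParsePrefix("10.244.0.0/16")    // podSubnet from the kubeadm config above
	services := netip.MustParsePrefix("10.96.0.0/12") // serviceSubnet / ServiceCIDR
	fmt.Println("pod/service CIDRs overlap:", pods.Overlaps(services)) // prints: false
}
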
	I0528 21:58:50.430822   73188 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0528 21:58:50.440906   73188 binaries.go:44] Found k8s binaries, skipping transfer
	I0528 21:58:50.440961   73188 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0528 21:58:50.450354   73188 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0528 21:58:50.467008   73188 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0528 21:58:50.483452   73188 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0528 21:58:50.500551   73188 ssh_runner.go:195] Run: grep 192.168.72.48	control-plane.minikube.internal$ /etc/hosts
	I0528 21:58:50.504597   73188 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.48	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 21:58:50.516659   73188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 21:58:50.634433   73188 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 21:58:50.651819   73188 certs.go:68] Setting up /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/default-k8s-diff-port-249165 for IP: 192.168.72.48
	I0528 21:58:50.651844   73188 certs.go:194] generating shared ca certs ...
	I0528 21:58:50.651868   73188 certs.go:226] acquiring lock for ca certs: {Name:mkf081a2e878fd451cfbdf01dc28a5f7a6a3abc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:58:50.652040   73188 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key
	I0528 21:58:50.652109   73188 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key
	I0528 21:58:50.652124   73188 certs.go:256] generating profile certs ...
	I0528 21:58:50.652223   73188 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/default-k8s-diff-port-249165/client.key
	I0528 21:58:50.652298   73188 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/default-k8s-diff-port-249165/apiserver.key.3e2f4fca
	I0528 21:58:50.652351   73188 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/default-k8s-diff-port-249165/proxy-client.key
	I0528 21:58:50.652505   73188 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem (1338 bytes)
	W0528 21:58:50.652546   73188 certs.go:480] ignoring /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760_empty.pem, impossibly tiny 0 bytes
	I0528 21:58:50.652558   73188 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem (1675 bytes)
	I0528 21:58:50.652589   73188 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem (1078 bytes)
	I0528 21:58:50.652617   73188 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem (1123 bytes)
	I0528 21:58:50.652645   73188 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem (1675 bytes)
	I0528 21:58:50.652687   73188 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem (1708 bytes)
	I0528 21:58:50.653356   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0528 21:58:50.687329   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0528 21:58:50.731844   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0528 21:58:50.758921   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0528 21:58:50.793162   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/default-k8s-diff-port-249165/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0528 21:58:50.820772   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/default-k8s-diff-port-249165/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0528 21:58:50.849830   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/default-k8s-diff-port-249165/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0528 21:58:50.875695   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/default-k8s-diff-port-249165/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0528 21:58:50.900876   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem --> /usr/share/ca-certificates/117602.pem (1708 bytes)
	I0528 21:58:50.925424   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0528 21:58:50.949453   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem --> /usr/share/ca-certificates/11760.pem (1338 bytes)
	I0528 21:58:50.973597   73188 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0528 21:58:50.990297   73188 ssh_runner.go:195] Run: openssl version
	I0528 21:58:50.996164   73188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11760.pem && ln -fs /usr/share/ca-certificates/11760.pem /etc/ssl/certs/11760.pem"
	I0528 21:58:51.007959   73188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11760.pem
	I0528 21:58:51.012987   73188 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 28 20:34 /usr/share/ca-certificates/11760.pem
	I0528 21:58:51.013062   73188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11760.pem
	I0528 21:58:51.019526   73188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11760.pem /etc/ssl/certs/51391683.0"
	I0528 21:58:51.031068   73188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/117602.pem && ln -fs /usr/share/ca-certificates/117602.pem /etc/ssl/certs/117602.pem"
	I0528 21:58:51.043064   73188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117602.pem
	I0528 21:58:51.048507   73188 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 28 20:34 /usr/share/ca-certificates/117602.pem
	I0528 21:58:51.048600   73188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117602.pem
	I0528 21:58:51.054818   73188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/117602.pem /etc/ssl/certs/3ec20f2e.0"
	I0528 21:58:51.065829   73188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0528 21:58:51.076414   73188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0528 21:58:51.081090   73188 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 28 20:22 /usr/share/ca-certificates/minikubeCA.pem
	I0528 21:58:51.081141   73188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0528 21:58:51.086736   73188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0528 21:58:51.096968   73188 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0528 21:58:51.101288   73188 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0528 21:58:51.107082   73188 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0528 21:58:51.112759   73188 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0528 21:58:51.118504   73188 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0528 21:58:51.124067   73188 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0528 21:58:51.129783   73188 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
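
The openssl x509 -checkend 86400 calls above confirm each control-plane certificate is still valid for at least 24 hours. A Go equivalent using crypto/x509 (a hypothetical helper that reads a PEM file path from the command line):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Equivalent of `openssl x509 -noout -in <cert> -checkend 86400`.
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least 24h")
}
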
	I0528 21:58:51.135390   73188 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-249165 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-249165 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.48 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:58:51.135521   73188 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0528 21:58:51.135583   73188 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0528 21:58:51.173919   73188 cri.go:89] found id: ""
	I0528 21:58:51.173995   73188 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0528 21:58:51.184361   73188 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0528 21:58:51.184381   73188 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0528 21:58:51.184386   73188 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0528 21:58:51.184424   73188 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0528 21:58:51.194386   73188 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0528 21:58:51.195726   73188 kubeconfig.go:125] found "default-k8s-diff-port-249165" server: "https://192.168.72.48:8444"
	I0528 21:58:51.198799   73188 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0528 21:58:51.208118   73188 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.48
	I0528 21:58:51.208146   73188 kubeadm.go:1154] stopping kube-system containers ...
	I0528 21:58:51.208157   73188 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0528 21:58:51.208193   73188 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0528 21:58:51.252026   73188 cri.go:89] found id: ""
	I0528 21:58:51.252089   73188 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0528 21:58:51.269404   73188 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0528 21:58:51.279728   73188 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0528 21:58:51.279744   73188 kubeadm.go:156] found existing configuration files:
	
	I0528 21:58:51.279790   73188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0528 21:58:51.289352   73188 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0528 21:58:51.289396   73188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0528 21:58:51.299059   73188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0528 21:58:51.308375   73188 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0528 21:58:51.308425   73188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0528 21:58:51.317866   73188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0528 21:58:51.327433   73188 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0528 21:58:51.327488   73188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0528 21:58:51.337148   73188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0528 21:58:51.346358   73188 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0528 21:58:51.346410   73188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0528 21:58:51.355689   73188 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0528 21:58:51.365235   73188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 21:58:51.488772   73188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 21:58:52.553360   73188 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.064544437s)
	I0528 21:58:52.553398   73188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0528 21:58:52.780281   73188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 21:58:52.839188   73188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
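
Rather than a full kubeadm init, the restart path re-runs the individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the uploaded /var/tmp/minikube/kubeadm.yaml. A sketch of that loop with os/exec; the phase list and PATH prefix are taken from the log, the error handling is simplified:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, phase := range phases {
		// Mirror `sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase ...`.
		args := append([]string{"env",
			"PATH=/var/lib/minikube/binaries/v1.30.1:" + os.Getenv("PATH"),
			"kubeadm", "init", "phase"}, phase...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("sudo", args...)
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("phase %v failed: %v\n%s", phase, err, out)
		}
	}
}
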
	I0528 21:58:52.914117   73188 api_server.go:52] waiting for apiserver process to appear ...
	I0528 21:58:52.914222   73188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:58:53.415170   73188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:58:53.914987   73188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:58:53.933842   73188 api_server.go:72] duration metric: took 1.019725255s to wait for apiserver process to appear ...
	I0528 21:58:53.933869   73188 api_server.go:88] waiting for apiserver healthz status ...
	I0528 21:58:53.933886   73188 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8444/healthz ...
	I0528 21:58:53.934358   73188 api_server.go:269] stopped: https://192.168.72.48:8444/healthz: Get "https://192.168.72.48:8444/healthz": dial tcp 192.168.72.48:8444: connect: connection refused
	I0528 21:58:54.434146   73188 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8444/healthz ...
	I0528 21:58:56.813345   73188 api_server.go:279] https://192.168.72.48:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0528 21:58:56.813384   73188 api_server.go:103] status: https://192.168.72.48:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0528 21:58:56.813396   73188 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8444/healthz ...
	I0528 21:58:56.821906   73188 api_server.go:279] https://192.168.72.48:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0528 21:58:56.821935   73188 api_server.go:103] status: https://192.168.72.48:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0528 21:58:56.934069   73188 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8444/healthz ...
	I0528 21:58:56.941002   73188 api_server.go:279] https://192.168.72.48:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 21:58:56.941034   73188 api_server.go:103] status: https://192.168.72.48:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 21:58:57.434777   73188 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8444/healthz ...
	I0528 21:58:57.439312   73188 api_server.go:279] https://192.168.72.48:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 21:58:57.439345   73188 api_server.go:103] status: https://192.168.72.48:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 21:58:57.934912   73188 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8444/healthz ...
	I0528 21:58:57.941171   73188 api_server.go:279] https://192.168.72.48:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 21:58:57.941201   73188 api_server.go:103] status: https://192.168.72.48:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 21:58:58.434198   73188 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8444/healthz ...
	I0528 21:58:58.438164   73188 api_server.go:279] https://192.168.72.48:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 21:58:58.438190   73188 api_server.go:103] status: https://192.168.72.48:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 21:58:58.934813   73188 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8444/healthz ...
	I0528 21:58:58.939873   73188 api_server.go:279] https://192.168.72.48:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 21:58:58.939899   73188 api_server.go:103] status: https://192.168.72.48:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 21:58:59.434373   73188 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8444/healthz ...
	I0528 21:58:59.438639   73188 api_server.go:279] https://192.168.72.48:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 21:58:59.438662   73188 api_server.go:103] status: https://192.168.72.48:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 21:58:59.934909   73188 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8444/healthz ...
	I0528 21:58:59.940297   73188 api_server.go:279] https://192.168.72.48:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 21:58:59.940331   73188 api_server.go:103] status: https://192.168.72.48:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 21:59:00.434920   73188 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8444/healthz ...
	I0528 21:59:00.440734   73188 api_server.go:279] https://192.168.72.48:8444/healthz returned 200:
	ok
	I0528 21:59:00.447107   73188 api_server.go:141] control plane version: v1.30.1
	I0528 21:59:00.447129   73188 api_server.go:131] duration metric: took 6.513254325s to wait for apiserver health ...
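(Editor's note on the block above: the api_server.go lines show the health wait cycling from 403, while anonymous access to /healthz is still forbidden during RBAC bootstrap, through 500, while post-start hooks such as rbac/bootstrap-roles are still failing, until the endpoint finally answers 200. A minimal sketch of that kind of poll loop, assuming a plain net/http client and the endpoint and cadence seen in the log; this is illustrative, not minikube's actual implementation:)

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns 200 OK or the deadline passes.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				// Anonymous probe against a self-signed apiserver cert, so skip verification.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz answered "ok"
				}
				// 403 while RBAC bootstraps, 500 while post-start hooks finish.
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.72.48:8444/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}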
	I0528 21:59:00.447137   73188 cni.go:84] Creating CNI manager for ""
	I0528 21:59:00.447143   73188 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 21:59:00.449008   73188 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0528 21:59:00.450184   73188 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0528 21:59:00.461520   73188 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
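(Editor's note: the two ssh_runner lines above correspond to creating /etc/cni/net.d and copying a 496-byte bridge CNI conflist into it. As a hedged illustration only, the snippet below writes a representative bridge+portmap conflist to that path; the JSON is a typical bridge chain and is not claimed to match byte-for-byte what minikube's template produces:)

	package main

	import (
		"log"
		"os"
		"path/filepath"
	)

	// A representative bridge CNI configuration; contents are illustrative.
	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	`

	func main() {
		dir := "/etc/cni/net.d"
		if err := os.MkdirAll(dir, 0o755); err != nil { // mirrors the `sudo mkdir -p` in the log
			log.Fatal(err)
		}
		if err := os.WriteFile(filepath.Join(dir, "1-k8s.conflist"), []byte(conflist), 0o644); err != nil {
			log.Fatal(err)
		}
	}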
	I0528 21:59:00.480494   73188 system_pods.go:43] waiting for kube-system pods to appear ...
	I0528 21:59:00.491722   73188 system_pods.go:59] 8 kube-system pods found
	I0528 21:59:00.491755   73188 system_pods.go:61] "coredns-7db6d8ff4d-qk6tz" [d3250a5a-2eda-41d3-86e2-227e85da8cb6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0528 21:59:00.491764   73188 system_pods.go:61] "etcd-default-k8s-diff-port-249165" [e1179b11-47b9-4803-91bb-a8d8470aac40] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0528 21:59:00.491771   73188 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-249165" [7f6c0680-8827-4f15-90e5-f8d9e1d1bc8c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0528 21:59:00.491780   73188 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-249165" [4d6f8bb3-0f4b-41fa-9b02-3b2c79513bf5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0528 21:59:00.491786   73188 system_pods.go:61] "kube-proxy-fvmjv" [df55e25a-a79a-4293-9636-31f5ebc4fc77] Running
	I0528 21:59:00.491791   73188 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-249165" [82200561-6687-448d-b73f-d0e047dec773] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0528 21:59:00.491797   73188 system_pods.go:61] "metrics-server-569cc877fc-k2q4p" [d1ec23de-6293-42a8-80f3-e28e007b6a34] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0528 21:59:00.491802   73188 system_pods.go:61] "storage-provisioner" [1f84dc9c-6b4e-44c9-82a2-5dabcb0b2178] Running
	I0528 21:59:00.491808   73188 system_pods.go:74] duration metric: took 11.287283ms to wait for pod list to return data ...
	I0528 21:59:00.491817   73188 node_conditions.go:102] verifying NodePressure condition ...
	I0528 21:59:00.495098   73188 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 21:59:00.495124   73188 node_conditions.go:123] node cpu capacity is 2
	I0528 21:59:00.495135   73188 node_conditions.go:105] duration metric: took 3.313626ms to run NodePressure ...
	I0528 21:59:00.495151   73188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 21:59:00.782161   73188 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0528 21:59:00.786287   73188 kubeadm.go:733] kubelet initialised
	I0528 21:59:00.786308   73188 kubeadm.go:734] duration metric: took 4.112496ms waiting for restarted kubelet to initialise ...
	I0528 21:59:00.786316   73188 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 21:59:00.790951   73188 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-qk6tz" in "kube-system" namespace to be "Ready" ...
	I0528 21:59:00.795459   73188 pod_ready.go:97] node "default-k8s-diff-port-249165" hosting pod "coredns-7db6d8ff4d-qk6tz" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-249165" has status "Ready":"False"
	I0528 21:59:00.795486   73188 pod_ready.go:81] duration metric: took 4.510715ms for pod "coredns-7db6d8ff4d-qk6tz" in "kube-system" namespace to be "Ready" ...
	E0528 21:59:00.795496   73188 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-249165" hosting pod "coredns-7db6d8ff4d-qk6tz" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-249165" has status "Ready":"False"
	I0528 21:59:00.795505   73188 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-249165" in "kube-system" namespace to be "Ready" ...
	I0528 21:59:00.799372   73188 pod_ready.go:97] node "default-k8s-diff-port-249165" hosting pod "etcd-default-k8s-diff-port-249165" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-249165" has status "Ready":"False"
	I0528 21:59:00.799395   73188 pod_ready.go:81] duration metric: took 3.878119ms for pod "etcd-default-k8s-diff-port-249165" in "kube-system" namespace to be "Ready" ...
	E0528 21:59:00.799405   73188 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-249165" hosting pod "etcd-default-k8s-diff-port-249165" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-249165" has status "Ready":"False"
	I0528 21:59:00.799412   73188 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-249165" in "kube-system" namespace to be "Ready" ...
	I0528 21:59:00.803708   73188 pod_ready.go:97] node "default-k8s-diff-port-249165" hosting pod "kube-apiserver-default-k8s-diff-port-249165" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-249165" has status "Ready":"False"
	I0528 21:59:00.803732   73188 pod_ready.go:81] duration metric: took 4.312817ms for pod "kube-apiserver-default-k8s-diff-port-249165" in "kube-system" namespace to be "Ready" ...
	E0528 21:59:00.803744   73188 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-249165" hosting pod "kube-apiserver-default-k8s-diff-port-249165" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-249165" has status "Ready":"False"
	I0528 21:59:00.803752   73188 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-249165" in "kube-system" namespace to be "Ready" ...
	I0528 21:59:00.883526   73188 pod_ready.go:97] node "default-k8s-diff-port-249165" hosting pod "kube-controller-manager-default-k8s-diff-port-249165" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-249165" has status "Ready":"False"
	I0528 21:59:00.883552   73188 pod_ready.go:81] duration metric: took 79.787719ms for pod "kube-controller-manager-default-k8s-diff-port-249165" in "kube-system" namespace to be "Ready" ...
	E0528 21:59:00.883562   73188 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-249165" hosting pod "kube-controller-manager-default-k8s-diff-port-249165" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-249165" has status "Ready":"False"
	I0528 21:59:00.883569   73188 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fvmjv" in "kube-system" namespace to be "Ready" ...
	I0528 21:59:01.284553   73188 pod_ready.go:92] pod "kube-proxy-fvmjv" in "kube-system" namespace has status "Ready":"True"
	I0528 21:59:01.284580   73188 pod_ready.go:81] duration metric: took 401.003384ms for pod "kube-proxy-fvmjv" in "kube-system" namespace to be "Ready" ...
	I0528 21:59:01.284590   73188 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-249165" in "kube-system" namespace to be "Ready" ...
	I0528 21:59:03.293222   73188 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-249165" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:04.291145   73188 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-249165" in "kube-system" namespace has status "Ready":"True"
	I0528 21:59:04.291171   73188 pod_ready.go:81] duration metric: took 3.006571778s for pod "kube-scheduler-default-k8s-diff-port-249165" in "kube-system" namespace to be "Ready" ...
	I0528 21:59:04.291183   73188 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace to be "Ready" ...
	I0528 21:59:06.297256   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:08.299092   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:10.797261   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:12.797546   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:15.297532   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:17.297769   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:19.298152   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:21.797794   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:24.298073   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:26.797503   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:29.297699   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:31.298091   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:33.799278   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:36.298358   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:38.298659   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:40.797501   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:43.297098   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:45.297322   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:47.798004   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:49.798749   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:52.296950   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:54.297779   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:56.297921   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:58.797953   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:01.297566   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:03.302555   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:05.797610   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:07.797893   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:09.798237   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:12.297953   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:14.298232   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:16.798660   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:19.296867   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:21.297325   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:23.797687   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:26.298657   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:28.798073   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:31.299219   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:33.800018   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:36.297914   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:38.297984   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:40.796919   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:42.798156   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:44.800231   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:47.297425   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:49.800316   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:52.297415   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:54.297549   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:56.798787   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:59.297851   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:01.298008   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:03.298732   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:05.797817   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:07.797913   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:10.297286   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:12.797866   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:14.799144   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:17.297592   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:19.298065   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:21.797973   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:23.798794   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:26.298087   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:28.300587   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:30.797976   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:33.297574   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:35.298403   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:37.797436   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:40.300414   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:42.797172   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:45.297340   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:47.297684   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:49.298815   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:51.299597   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:53.798447   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:56.297483   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:58.298264   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:00.798507   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:03.297276   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:05.299518   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:07.799770   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:10.300402   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:12.796971   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:14.798057   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:16.798315   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:18.800481   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:21.298816   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:23.797133   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:25.798165   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:28.297030   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:30.797031   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:32.797960   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:34.798334   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:37.298013   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:39.797122   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
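(Editor's note: the long run of pod_ready.go:102 lines above is minikube polling the metrics-server-569cc877fc-k2q4p pod roughly every two seconds for a Ready condition that never turns True within the 4m0s budget, which is why the AddonExistsAfterStop tests time out. A hedged sketch of an equivalent wait using client-go, with the namespace, pod name, and timeout taken from the log; this is not minikube's actual pod_ready helper:)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	// waitForPodReady polls the pod every 2s until Ready or until timeout expires.
	func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // tolerate transient errors and keep polling
				}
				return podReady(pod), nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		err = waitForPodReady(context.Background(), cs, "kube-system",
			"metrics-server-569cc877fc-k2q4p", 4*time.Minute)
		if err != nil {
			fmt.Println("pod never became Ready:", err)
		}
	}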
	
	
	==> CRI-O <==
	May 28 22:02:43 embed-certs-595279 crio[723]: time="2024-05-28 22:02:43.100634668Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=932f47d2-1159-42fb-8398-35e6c2445d72 name=/runtime.v1.RuntimeService/Version
	May 28 22:02:43 embed-certs-595279 crio[723]: time="2024-05-28 22:02:43.101697802Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=14e1effb-99b7-4343-ab05-e8d5006f391b name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:02:43 embed-certs-595279 crio[723]: time="2024-05-28 22:02:43.102135514Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716933763102116365,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=14e1effb-99b7-4343-ab05-e8d5006f391b name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:02:43 embed-certs-595279 crio[723]: time="2024-05-28 22:02:43.102717512Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0ade7754-84d7-4780-8e0e-be0c0d6dba99 name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:02:43 embed-certs-595279 crio[723]: time="2024-05-28 22:02:43.102781646Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0ade7754-84d7-4780-8e0e-be0c0d6dba99 name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:02:43 embed-certs-595279 crio[723]: time="2024-05-28 22:02:43.103095864Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c6334a28f9d29a95b433857aede8e5afb904c6d2e764f1afa093ca5e7c09de09,PodSandboxId:42fdb7574da76cec074950d11fbd6528dfa29a6188e70e5c07da8878169ce32d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716932983414073366,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bf52279-1fbc-40e5-8376-992c545c55dd,},Annotations:map[string]string{io.kubernetes.container.hash: 626393a8,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17521a4ecdae6117bdf145c9974f8c008f247f6115ecbd86caf00c69bc3a76ab,PodSandboxId:db5c23ed716b1d80aaaaff9e0c885d6269ef63f81b2cbc1c50f718374f8be9e4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1716932971881531924,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1b75037d-627f-4727-8935-8b459c226fe7,},Annotations:map[string]string{io.kubernetes.container.hash: 181f13b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da18d6d5334d9662f0b6045799eb8276b1589a4cdce01aac8f18b05145d94b8e,PodSandboxId:5757eebcac3fec427adff473a5345464791a98c28d6d27a92f35ac4e3e1eeaa7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716932968380358567,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8cb7b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3908d89-cfc6-4f1a-9aef-861aac0d3e29,},Annotations:map[string]string{io.kubernetes.container.hash: 1c6a8418,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c5ee70d85c3e595c91ab0c4bcfdbd1f1161b2643af86fce85c056fc5d38482d,PodSandboxId:42fdb7574da76cec074950d11fbd6528dfa29a6188e70e5c07da8878169ce32d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716932952735350469,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7bf52279-1fbc-40e5-8376-992c545c55dd,},Annotations:map[string]string{io.kubernetes.container.hash: 626393a8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfb41c075cb48a9d458fcc2a85aca29b37f56931246adac5b1230661c528edcc,PodSandboxId:01674a6515d0f2168d66e5d53a45a0b9da95b3f7349a36404ab03e925d034d82,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716932952694001749,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pnl5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c2c68bc-42c2-425e-ae35-a8c07b5d5
221,},Annotations:map[string]string{io.kubernetes.container.hash: 29e7a296,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:056fb79dac85895cd23ca69b8a238157e22babd5bf16ae416bb52d6e9b470622,PodSandboxId:1a57d3e3c4369791d819c0931e62e61d6cf80db256612b5ba0e89273ed65e27a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716932948909493688,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-595279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 379646dca49871cf019f010941906ede,},Annota
tions:map[string]string{io.kubernetes.container.hash: 2e2ed24,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51b2876b1f3db864aa0e6c3000bdfb694e988ccb6c3ffdc56053e0547989c5e5,PodSandboxId:56df0f463ab2ed29cc0ec6c5168b8f676efca7b0d53aeddde04c3c7791a677eb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716932948904428769,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-595279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c461cb87b5b1c21ce42827beca6c1ef1,},Annotations:map[str
ing]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3047accd150d979b115d9296d1b09bdad9f23c29d7c066800914fe3e6e001d3c,PodSandboxId:be288742c1e8abcc613b0f3fa06841cc10d07835180f68f5b65c15201e80f32a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716932948891115178,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-595279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8d43b1ed5aca63e62d2ff5a84cd7e44,},Annotations:map[string]string{io.kubernetes.container.hash: e
e247d8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5366e4c2bcdabc25f55a71903840f5a97745fb1bac231cf9c73daaf2b9dab89,PodSandboxId:2f4260e33a3bebe1e487a02c066cc93486631d5232dd87bf02ab3d9e896353e4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716932948902903465,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-595279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e976cff78f1a85f2cc285af7b550e6b,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0ade7754-84d7-4780-8e0e-be0c0d6dba99 name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:02:43 embed-certs-595279 crio[723]: time="2024-05-28 22:02:43.149275248Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2057e55c-f989-4aaf-be0d-6615a373512d name=/runtime.v1.RuntimeService/Version
	May 28 22:02:43 embed-certs-595279 crio[723]: time="2024-05-28 22:02:43.149359523Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2057e55c-f989-4aaf-be0d-6615a373512d name=/runtime.v1.RuntimeService/Version
	May 28 22:02:43 embed-certs-595279 crio[723]: time="2024-05-28 22:02:43.150626301Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cb05e4ba-a6e5-4458-a339-a645f28ef27b name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:02:43 embed-certs-595279 crio[723]: time="2024-05-28 22:02:43.150995951Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716933763150975721,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cb05e4ba-a6e5-4458-a339-a645f28ef27b name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:02:43 embed-certs-595279 crio[723]: time="2024-05-28 22:02:43.151729271Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fb0ae6f2-d2b1-434a-997b-407d5e3fc18b name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:02:43 embed-certs-595279 crio[723]: time="2024-05-28 22:02:43.151792019Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fb0ae6f2-d2b1-434a-997b-407d5e3fc18b name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:02:43 embed-certs-595279 crio[723]: time="2024-05-28 22:02:43.151982876Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c6334a28f9d29a95b433857aede8e5afb904c6d2e764f1afa093ca5e7c09de09,PodSandboxId:42fdb7574da76cec074950d11fbd6528dfa29a6188e70e5c07da8878169ce32d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716932983414073366,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bf52279-1fbc-40e5-8376-992c545c55dd,},Annotations:map[string]string{io.kubernetes.container.hash: 626393a8,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17521a4ecdae6117bdf145c9974f8c008f247f6115ecbd86caf00c69bc3a76ab,PodSandboxId:db5c23ed716b1d80aaaaff9e0c885d6269ef63f81b2cbc1c50f718374f8be9e4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1716932971881531924,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1b75037d-627f-4727-8935-8b459c226fe7,},Annotations:map[string]string{io.kubernetes.container.hash: 181f13b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da18d6d5334d9662f0b6045799eb8276b1589a4cdce01aac8f18b05145d94b8e,PodSandboxId:5757eebcac3fec427adff473a5345464791a98c28d6d27a92f35ac4e3e1eeaa7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716932968380358567,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8cb7b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3908d89-cfc6-4f1a-9aef-861aac0d3e29,},Annotations:map[string]string{io.kubernetes.container.hash: 1c6a8418,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c5ee70d85c3e595c91ab0c4bcfdbd1f1161b2643af86fce85c056fc5d38482d,PodSandboxId:42fdb7574da76cec074950d11fbd6528dfa29a6188e70e5c07da8878169ce32d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716932952735350469,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7bf52279-1fbc-40e5-8376-992c545c55dd,},Annotations:map[string]string{io.kubernetes.container.hash: 626393a8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfb41c075cb48a9d458fcc2a85aca29b37f56931246adac5b1230661c528edcc,PodSandboxId:01674a6515d0f2168d66e5d53a45a0b9da95b3f7349a36404ab03e925d034d82,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716932952694001749,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pnl5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c2c68bc-42c2-425e-ae35-a8c07b5d5
221,},Annotations:map[string]string{io.kubernetes.container.hash: 29e7a296,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:056fb79dac85895cd23ca69b8a238157e22babd5bf16ae416bb52d6e9b470622,PodSandboxId:1a57d3e3c4369791d819c0931e62e61d6cf80db256612b5ba0e89273ed65e27a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716932948909493688,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-595279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 379646dca49871cf019f010941906ede,},Annota
tions:map[string]string{io.kubernetes.container.hash: 2e2ed24,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51b2876b1f3db864aa0e6c3000bdfb694e988ccb6c3ffdc56053e0547989c5e5,PodSandboxId:56df0f463ab2ed29cc0ec6c5168b8f676efca7b0d53aeddde04c3c7791a677eb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716932948904428769,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-595279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c461cb87b5b1c21ce42827beca6c1ef1,},Annotations:map[str
ing]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3047accd150d979b115d9296d1b09bdad9f23c29d7c066800914fe3e6e001d3c,PodSandboxId:be288742c1e8abcc613b0f3fa06841cc10d07835180f68f5b65c15201e80f32a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716932948891115178,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-595279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8d43b1ed5aca63e62d2ff5a84cd7e44,},Annotations:map[string]string{io.kubernetes.container.hash: e
e247d8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5366e4c2bcdabc25f55a71903840f5a97745fb1bac231cf9c73daaf2b9dab89,PodSandboxId:2f4260e33a3bebe1e487a02c066cc93486631d5232dd87bf02ab3d9e896353e4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716932948902903465,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-595279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e976cff78f1a85f2cc285af7b550e6b,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fb0ae6f2-d2b1-434a-997b-407d5e3fc18b name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:02:43 embed-certs-595279 crio[723]: time="2024-05-28 22:02:43.191684177Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4a2582bc-0362-4367-8ddc-97f12fa0ab49 name=/runtime.v1.RuntimeService/Version
	May 28 22:02:43 embed-certs-595279 crio[723]: time="2024-05-28 22:02:43.191800212Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4a2582bc-0362-4367-8ddc-97f12fa0ab49 name=/runtime.v1.RuntimeService/Version
	May 28 22:02:43 embed-certs-595279 crio[723]: time="2024-05-28 22:02:43.193075376Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7b874432-6a5a-46c0-a027-5e6f5ac426f6 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:02:43 embed-certs-595279 crio[723]: time="2024-05-28 22:02:43.193648437Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716933763193555742,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7b874432-6a5a-46c0-a027-5e6f5ac426f6 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:02:43 embed-certs-595279 crio[723]: time="2024-05-28 22:02:43.194247787Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=66ba587c-f959-4f1e-a71f-69d32ae9c99e name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:02:43 embed-certs-595279 crio[723]: time="2024-05-28 22:02:43.194334090Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=66ba587c-f959-4f1e-a71f-69d32ae9c99e name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:02:43 embed-certs-595279 crio[723]: time="2024-05-28 22:02:43.194660549Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c6334a28f9d29a95b433857aede8e5afb904c6d2e764f1afa093ca5e7c09de09,PodSandboxId:42fdb7574da76cec074950d11fbd6528dfa29a6188e70e5c07da8878169ce32d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716932983414073366,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bf52279-1fbc-40e5-8376-992c545c55dd,},Annotations:map[string]string{io.kubernetes.container.hash: 626393a8,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17521a4ecdae6117bdf145c9974f8c008f247f6115ecbd86caf00c69bc3a76ab,PodSandboxId:db5c23ed716b1d80aaaaff9e0c885d6269ef63f81b2cbc1c50f718374f8be9e4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1716932971881531924,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1b75037d-627f-4727-8935-8b459c226fe7,},Annotations:map[string]string{io.kubernetes.container.hash: 181f13b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da18d6d5334d9662f0b6045799eb8276b1589a4cdce01aac8f18b05145d94b8e,PodSandboxId:5757eebcac3fec427adff473a5345464791a98c28d6d27a92f35ac4e3e1eeaa7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716932968380358567,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8cb7b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3908d89-cfc6-4f1a-9aef-861aac0d3e29,},Annotations:map[string]string{io.kubernetes.container.hash: 1c6a8418,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c5ee70d85c3e595c91ab0c4bcfdbd1f1161b2643af86fce85c056fc5d38482d,PodSandboxId:42fdb7574da76cec074950d11fbd6528dfa29a6188e70e5c07da8878169ce32d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716932952735350469,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7bf52279-1fbc-40e5-8376-992c545c55dd,},Annotations:map[string]string{io.kubernetes.container.hash: 626393a8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfb41c075cb48a9d458fcc2a85aca29b37f56931246adac5b1230661c528edcc,PodSandboxId:01674a6515d0f2168d66e5d53a45a0b9da95b3f7349a36404ab03e925d034d82,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716932952694001749,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pnl5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c2c68bc-42c2-425e-ae35-a8c07b5d5
221,},Annotations:map[string]string{io.kubernetes.container.hash: 29e7a296,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:056fb79dac85895cd23ca69b8a238157e22babd5bf16ae416bb52d6e9b470622,PodSandboxId:1a57d3e3c4369791d819c0931e62e61d6cf80db256612b5ba0e89273ed65e27a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716932948909493688,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-595279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 379646dca49871cf019f010941906ede,},Annota
tions:map[string]string{io.kubernetes.container.hash: 2e2ed24,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51b2876b1f3db864aa0e6c3000bdfb694e988ccb6c3ffdc56053e0547989c5e5,PodSandboxId:56df0f463ab2ed29cc0ec6c5168b8f676efca7b0d53aeddde04c3c7791a677eb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716932948904428769,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-595279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c461cb87b5b1c21ce42827beca6c1ef1,},Annotations:map[str
ing]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3047accd150d979b115d9296d1b09bdad9f23c29d7c066800914fe3e6e001d3c,PodSandboxId:be288742c1e8abcc613b0f3fa06841cc10d07835180f68f5b65c15201e80f32a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716932948891115178,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-595279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8d43b1ed5aca63e62d2ff5a84cd7e44,},Annotations:map[string]string{io.kubernetes.container.hash: e
e247d8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5366e4c2bcdabc25f55a71903840f5a97745fb1bac231cf9c73daaf2b9dab89,PodSandboxId:2f4260e33a3bebe1e487a02c066cc93486631d5232dd87bf02ab3d9e896353e4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716932948902903465,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-595279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e976cff78f1a85f2cc285af7b550e6b,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=66ba587c-f959-4f1e-a71f-69d32ae9c99e name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:02:43 embed-certs-595279 crio[723]: time="2024-05-28 22:02:43.214196812Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=4b6c461f-5a3a-4f66-a5c5-c3ade85fe998 name=/runtime.v1.RuntimeService/ListPodSandbox
	May 28 22:02:43 embed-certs-595279 crio[723]: time="2024-05-28 22:02:43.214470110Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:db5c23ed716b1d80aaaaff9e0c885d6269ef63f81b2cbc1c50f718374f8be9e4,Metadata:&PodSandboxMetadata{Name:busybox,Uid:1b75037d-627f-4727-8935-8b459c226fe7,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716932968338949243,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1b75037d-627f-4727-8935-8b459c226fe7,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-28T21:49:12.213555212Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5757eebcac3fec427adff473a5345464791a98c28d6d27a92f35ac4e3e1eeaa7,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-8cb7b,Uid:b3908d89-cfc6-4f1a-9aef-861aac0d3e29,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716932968041901
659,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-8cb7b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3908d89-cfc6-4f1a-9aef-861aac0d3e29,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-28T21:49:12.213612142Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a30fbece495cba9b920525ca9f325c0503cfaf34c7cf9ec58de0fe7b7dd5d1ec,Metadata:&PodSandboxMetadata{Name:metrics-server-569cc877fc-f6fz2,Uid:b5e432cd-3b95-4f20-b9b3-c498512a7564,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716932960244439003,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-569cc877fc-f6fz2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5e432cd-3b95-4f20-b9b3-c498512a7564,k8s-app: metrics-server,pod-template-hash: 569cc877fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-28T21:49:12.
213560160Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:01674a6515d0f2168d66e5d53a45a0b9da95b3f7349a36404ab03e925d034d82,Metadata:&PodSandboxMetadata{Name:kube-proxy-pnl5w,Uid:9c2c68bc-42c2-425e-ae35-a8c07b5d5221,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716932952530098400,Labels:map[string]string{controller-revision-hash: 5dbf89796d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-pnl5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c2c68bc-42c2-425e-ae35-a8c07b5d5221,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-28T21:49:12.213546996Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:42fdb7574da76cec074950d11fbd6528dfa29a6188e70e5c07da8878169ce32d,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:7bf52279-1fbc-40e5-8376-992c545c55dd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716932952529057550,Labels:map[string
]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bf52279-1fbc-40e5-8376-992c545c55dd,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.i
o/config.seen: 2024-05-28T21:49:12.213553749Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:56df0f463ab2ed29cc0ec6c5168b8f676efca7b0d53aeddde04c3c7791a677eb,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-595279,Uid:c461cb87b5b1c21ce42827beca6c1ef1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716932948663447956,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-595279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c461cb87b5b1c21ce42827beca6c1ef1,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c461cb87b5b1c21ce42827beca6c1ef1,kubernetes.io/config.seen: 2024-05-28T21:49:08.166384748Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2f4260e33a3bebe1e487a02c066cc93486631d5232dd87bf02ab3d9e896353e4,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-595279,Uid:0e976cff78f1a85f2cc285af7b550e6
b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716932948662147948,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-595279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e976cff78f1a85f2cc285af7b550e6b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0e976cff78f1a85f2cc285af7b550e6b,kubernetes.io/config.seen: 2024-05-28T21:49:08.166383781Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:be288742c1e8abcc613b0f3fa06841cc10d07835180f68f5b65c15201e80f32a,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-595279,Uid:d8d43b1ed5aca63e62d2ff5a84cd7e44,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716932948656740949,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-595279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8d43b1ed5
aca63e62d2ff5a84cd7e44,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.79:2379,kubernetes.io/config.hash: d8d43b1ed5aca63e62d2ff5a84cd7e44,kubernetes.io/config.seen: 2024-05-28T21:49:08.226662312Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1a57d3e3c4369791d819c0931e62e61d6cf80db256612b5ba0e89273ed65e27a,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-595279,Uid:379646dca49871cf019f010941906ede,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1716932948656016772,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-595279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 379646dca49871cf019f010941906ede,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.79:8443,kubernetes.io/config.hash: 379646dca49871cf019f01094190
6ede,kubernetes.io/config.seen: 2024-05-28T21:49:08.166379651Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=4b6c461f-5a3a-4f66-a5c5-c3ade85fe998 name=/runtime.v1.RuntimeService/ListPodSandbox
	May 28 22:02:43 embed-certs-595279 crio[723]: time="2024-05-28 22:02:43.215348972Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=af2aae22-ece7-4d8d-b4f8-8bca13a8742c name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:02:43 embed-certs-595279 crio[723]: time="2024-05-28 22:02:43.215432198Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=af2aae22-ece7-4d8d-b4f8-8bca13a8742c name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:02:43 embed-certs-595279 crio[723]: time="2024-05-28 22:02:43.215702440Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c6334a28f9d29a95b433857aede8e5afb904c6d2e764f1afa093ca5e7c09de09,PodSandboxId:42fdb7574da76cec074950d11fbd6528dfa29a6188e70e5c07da8878169ce32d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716932983414073366,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bf52279-1fbc-40e5-8376-992c545c55dd,},Annotations:map[string]string{io.kubernetes.container.hash: 626393a8,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17521a4ecdae6117bdf145c9974f8c008f247f6115ecbd86caf00c69bc3a76ab,PodSandboxId:db5c23ed716b1d80aaaaff9e0c885d6269ef63f81b2cbc1c50f718374f8be9e4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1716932971881531924,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1b75037d-627f-4727-8935-8b459c226fe7,},Annotations:map[string]string{io.kubernetes.container.hash: 181f13b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da18d6d5334d9662f0b6045799eb8276b1589a4cdce01aac8f18b05145d94b8e,PodSandboxId:5757eebcac3fec427adff473a5345464791a98c28d6d27a92f35ac4e3e1eeaa7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716932968380358567,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8cb7b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3908d89-cfc6-4f1a-9aef-861aac0d3e29,},Annotations:map[string]string{io.kubernetes.container.hash: 1c6a8418,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c5ee70d85c3e595c91ab0c4bcfdbd1f1161b2643af86fce85c056fc5d38482d,PodSandboxId:42fdb7574da76cec074950d11fbd6528dfa29a6188e70e5c07da8878169ce32d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716932952735350469,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7bf52279-1fbc-40e5-8376-992c545c55dd,},Annotations:map[string]string{io.kubernetes.container.hash: 626393a8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfb41c075cb48a9d458fcc2a85aca29b37f56931246adac5b1230661c528edcc,PodSandboxId:01674a6515d0f2168d66e5d53a45a0b9da95b3f7349a36404ab03e925d034d82,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716932952694001749,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pnl5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c2c68bc-42c2-425e-ae35-a8c07b5d5
221,},Annotations:map[string]string{io.kubernetes.container.hash: 29e7a296,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:056fb79dac85895cd23ca69b8a238157e22babd5bf16ae416bb52d6e9b470622,PodSandboxId:1a57d3e3c4369791d819c0931e62e61d6cf80db256612b5ba0e89273ed65e27a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716932948909493688,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-595279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 379646dca49871cf019f010941906ede,},Annota
tions:map[string]string{io.kubernetes.container.hash: 2e2ed24,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51b2876b1f3db864aa0e6c3000bdfb694e988ccb6c3ffdc56053e0547989c5e5,PodSandboxId:56df0f463ab2ed29cc0ec6c5168b8f676efca7b0d53aeddde04c3c7791a677eb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716932948904428769,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-595279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c461cb87b5b1c21ce42827beca6c1ef1,},Annotations:map[str
ing]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3047accd150d979b115d9296d1b09bdad9f23c29d7c066800914fe3e6e001d3c,PodSandboxId:be288742c1e8abcc613b0f3fa06841cc10d07835180f68f5b65c15201e80f32a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716932948891115178,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-595279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8d43b1ed5aca63e62d2ff5a84cd7e44,},Annotations:map[string]string{io.kubernetes.container.hash: e
e247d8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5366e4c2bcdabc25f55a71903840f5a97745fb1bac231cf9c73daaf2b9dab89,PodSandboxId:2f4260e33a3bebe1e487a02c066cc93486631d5232dd87bf02ab3d9e896353e4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716932948902903465,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-595279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e976cff78f1a85f2cc285af7b550e6b,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=af2aae22-ece7-4d8d-b4f8-8bca13a8742c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c6334a28f9d29       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   42fdb7574da76       storage-provisioner
	17521a4ecdae6       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   db5c23ed716b1       busybox
	da18d6d5334d9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   5757eebcac3fe       coredns-7db6d8ff4d-8cb7b
	9c5ee70d85c3e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   42fdb7574da76       storage-provisioner
	cfb41c075cb48       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      13 minutes ago      Running             kube-proxy                1                   01674a6515d0f       kube-proxy-pnl5w
	056fb79dac858       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      13 minutes ago      Running             kube-apiserver            1                   1a57d3e3c4369       kube-apiserver-embed-certs-595279
	51b2876b1f3db       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      13 minutes ago      Running             kube-scheduler            1                   56df0f463ab2e       kube-scheduler-embed-certs-595279
	b5366e4c2bcda       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      13 minutes ago      Running             kube-controller-manager   1                   2f4260e33a3be       kube-controller-manager-embed-certs-595279
	3047accd150d9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago      Running             etcd                      1                   be288742c1e8a       etcd-embed-certs-595279
	
	
	==> coredns [da18d6d5334d9662f0b6045799eb8276b1589a4cdce01aac8f18b05145d94b8e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57157 - 4869 "HINFO IN 2951049221763865448.2846863702263008063. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022951582s
	
	
	==> describe nodes <==
	Name:               embed-certs-595279
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-595279
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82
	                    minikube.k8s.io/name=embed-certs-595279
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_28T21_40_36_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 21:40:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-595279
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 22:02:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 May 2024 21:59:54 +0000   Tue, 28 May 2024 21:40:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 May 2024 21:59:54 +0000   Tue, 28 May 2024 21:40:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 May 2024 21:59:54 +0000   Tue, 28 May 2024 21:40:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 May 2024 21:59:54 +0000   Tue, 28 May 2024 21:49:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.79
	  Hostname:    embed-certs-595279
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bc8c0962a1714691aa4113fd41e50f5c
	  System UUID:                bc8c0962-a171-4691-aa41-13fd41e50f5c
	  Boot ID:                    98dbd1d5-d649-4a15-b07f-84f7ee63e3c4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-7db6d8ff4d-8cb7b                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-embed-certs-595279                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-embed-certs-595279             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-embed-certs-595279    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-pnl5w                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-embed-certs-595279             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-569cc877fc-f6fz2               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node embed-certs-595279 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node embed-certs-595279 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node embed-certs-595279 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node embed-certs-595279 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node embed-certs-595279 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m                kubelet          Node embed-certs-595279 status is now: NodeHasSufficientPID
	  Normal  NodeReady                22m                kubelet          Node embed-certs-595279 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node embed-certs-595279 event: Registered Node embed-certs-595279 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-595279 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-595279 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-595279 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-595279 event: Registered Node embed-certs-595279 in Controller
	
	
	==> dmesg <==
	[May28 21:48] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050794] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040380] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.483695] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.386823] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.573816] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.236578] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.061068] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063064] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[May28 21:49] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.149483] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[  +0.297313] systemd-fstab-generator[707]: Ignoring "noauto" option for root device
	[  +4.367360] systemd-fstab-generator[806]: Ignoring "noauto" option for root device
	[  +0.061369] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.031936] systemd-fstab-generator[926]: Ignoring "noauto" option for root device
	[  +4.674660] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.844754] systemd-fstab-generator[1542]: Ignoring "noauto" option for root device
	[  +4.786337] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.585587] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.070678] kauditd_printk_skb: 27 callbacks suppressed
	
	
	==> etcd [3047accd150d979b115d9296d1b09bdad9f23c29d7c066800914fe3e6e001d3c] <==
	{"level":"info","ts":"2024-05-28T21:49:27.22179Z","caller":"traceutil/trace.go:171","msg":"trace[417555833] linearizableReadLoop","detail":"{readStateIndex:667; appliedIndex:665; }","duration":"833.841372ms","start":"2024-05-28T21:49:26.387941Z","end":"2024-05-28T21:49:27.221783Z","steps":["trace[417555833] 'read index received'  (duration: 184.226664ms)","trace[417555833] 'applied index is now lower than readState.Index'  (duration: 649.614102ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-28T21:49:27.228281Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"652.087392ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-embed-certs-595279\" ","response":"range_response_count:1 size:4475"}
	{"level":"info","ts":"2024-05-28T21:49:27.228644Z","caller":"traceutil/trace.go:171","msg":"trace[1173924614] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-embed-certs-595279; range_end:; response_count:1; response_revision:626; }","duration":"652.453844ms","start":"2024-05-28T21:49:26.576175Z","end":"2024-05-28T21:49:27.228629Z","steps":["trace[1173924614] 'agreement among raft nodes before linearized reading'  (duration: 651.767164ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T21:49:27.228684Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-28T21:49:26.576142Z","time spent":"652.528843ms","remote":"127.0.0.1:44154","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":1,"response size":4499,"request content":"key:\"/registry/pods/kube-system/kube-scheduler-embed-certs-595279\" "}
	{"level":"warn","ts":"2024-05-28T21:49:27.229143Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"841.192038ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-8cb7b\" ","response":"range_response_count:1 size:4820"}
	{"level":"info","ts":"2024-05-28T21:49:27.229199Z","caller":"traceutil/trace.go:171","msg":"trace[898731845] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7db6d8ff4d-8cb7b; range_end:; response_count:1; response_revision:626; }","duration":"841.269642ms","start":"2024-05-28T21:49:26.38792Z","end":"2024-05-28T21:49:27.229189Z","steps":["trace[898731845] 'agreement among raft nodes before linearized reading'  (duration: 833.936787ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T21:49:27.229246Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-28T21:49:26.387908Z","time spent":"841.330801ms","remote":"127.0.0.1:44154","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":1,"response size":4844,"request content":"key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-8cb7b\" "}
	{"level":"warn","ts":"2024-05-28T21:49:27.229546Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.303705ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-28T21:49:27.229652Z","caller":"traceutil/trace.go:171","msg":"trace[880333716] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:626; }","duration":"113.428611ms","start":"2024-05-28T21:49:27.116214Z","end":"2024-05-28T21:49:27.229643Z","steps":["trace[880333716] 'agreement among raft nodes before linearized reading'  (duration: 113.306564ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-28T21:50:13.67084Z","caller":"traceutil/trace.go:171","msg":"trace[760187028] transaction","detail":"{read_only:false; response_revision:690; number_of_response:1; }","duration":"113.910469ms","start":"2024-05-28T21:50:13.556889Z","end":"2024-05-28T21:50:13.6708Z","steps":["trace[760187028] 'process raft request'  (duration: 113.813482ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T21:58:52.660268Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"537.492048ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-28T21:58:52.660457Z","caller":"traceutil/trace.go:171","msg":"trace[1767165666] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1117; }","duration":"537.709156ms","start":"2024-05-28T21:58:52.122677Z","end":"2024-05-28T21:58:52.660386Z","steps":["trace[1767165666] 'range keys from in-memory index tree'  (duration: 537.439608ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T21:58:52.660504Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-28T21:58:52.122662Z","time spent":"537.82857ms","remote":"127.0.0.1:44154","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":0,"response size":29,"request content":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" "}
	{"level":"warn","ts":"2024-05-28T21:58:52.660294Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"362.467032ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-05-28T21:58:52.660839Z","caller":"traceutil/trace.go:171","msg":"trace[1703068809] range","detail":"{range_begin:/registry/priorityclasses/; range_end:/registry/priorityclasses0; response_count:0; response_revision:1117; }","duration":"363.051798ms","start":"2024-05-28T21:58:52.297775Z","end":"2024-05-28T21:58:52.660827Z","steps":["trace[1703068809] 'count revisions from in-memory index tree'  (duration: 362.381392ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T21:58:52.660892Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-28T21:58:52.297759Z","time spent":"363.122321ms","remote":"127.0.0.1:44340","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":2,"response size":31,"request content":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" count_only:true "}
	{"level":"warn","ts":"2024-05-28T21:58:52.660996Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"545.50174ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-28T21:58:52.661039Z","caller":"traceutil/trace.go:171","msg":"trace[470276632] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1117; }","duration":"545.558691ms","start":"2024-05-28T21:58:52.115472Z","end":"2024-05-28T21:58:52.661031Z","steps":["trace[470276632] 'range keys from in-memory index tree'  (duration: 545.376747ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T21:58:52.661059Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-28T21:58:52.115461Z","time spent":"545.592846ms","remote":"127.0.0.1:43940","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-05-28T21:58:52.660935Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"699.266403ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/\" range_end:\"/registry/replicasets0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-05-28T21:58:52.661206Z","caller":"traceutil/trace.go:171","msg":"trace[1808651470] range","detail":"{range_begin:/registry/replicasets/; range_end:/registry/replicasets0; response_count:0; response_revision:1117; }","duration":"699.556983ms","start":"2024-05-28T21:58:51.961641Z","end":"2024-05-28T21:58:52.661198Z","steps":["trace[1808651470] 'count revisions from in-memory index tree'  (duration: 699.12125ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T21:58:52.661288Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-28T21:58:51.961627Z","time spent":"699.628527ms","remote":"127.0.0.1:44476","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":2,"response size":31,"request content":"key:\"/registry/replicasets/\" range_end:\"/registry/replicasets0\" count_only:true "}
	{"level":"info","ts":"2024-05-28T21:59:10.848798Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":891}
	{"level":"info","ts":"2024-05-28T21:59:10.858556Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":891,"took":"9.506359ms","hash":1635858895,"current-db-size-bytes":2760704,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":2760704,"current-db-size-in-use":"2.8 MB"}
	{"level":"info","ts":"2024-05-28T21:59:10.858653Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1635858895,"revision":891,"compact-revision":-1}
	
	
	==> kernel <==
	 22:02:43 up 13 min,  0 users,  load average: 0.79, 0.36, 0.20
	Linux embed-certs-595279 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [056fb79dac85895cd23ca69b8a238157e22babd5bf16ae416bb52d6e9b470622] <==
	Trace[1518601458]: [539.816048ms] [539.816048ms] END
	W0528 21:59:12.152181       1 handler_proxy.go:93] no RequestInfo found in the context
	E0528 21:59:12.152328       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0528 21:59:13.152885       1 handler_proxy.go:93] no RequestInfo found in the context
	E0528 21:59:13.153002       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0528 21:59:13.153032       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0528 21:59:13.152898       1 handler_proxy.go:93] no RequestInfo found in the context
	E0528 21:59:13.153138       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0528 21:59:13.154300       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0528 22:00:13.153331       1 handler_proxy.go:93] no RequestInfo found in the context
	E0528 22:00:13.153423       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0528 22:00:13.153433       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0528 22:00:13.154497       1 handler_proxy.go:93] no RequestInfo found in the context
	E0528 22:00:13.154677       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0528 22:00:13.154731       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0528 22:02:13.154047       1 handler_proxy.go:93] no RequestInfo found in the context
	E0528 22:02:13.154174       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0528 22:02:13.154184       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0528 22:02:13.155122       1 handler_proxy.go:93] no RequestInfo found in the context
	E0528 22:02:13.155229       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0528 22:02:13.155257       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [b5366e4c2bcdabc25f55a71903840f5a97745fb1bac231cf9c73daaf2b9dab89] <==
	I0528 21:56:55.930005       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 21:57:25.369906       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 21:57:25.937651       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 21:57:55.375091       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 21:57:55.945807       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 21:58:25.381083       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 21:58:25.953787       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 21:58:55.385993       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 21:58:55.960971       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 21:59:25.391336       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 21:59:25.972129       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 21:59:55.396977       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 21:59:55.979899       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0528 22:00:15.248853       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="363.365µs"
	E0528 22:00:25.402485       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:00:25.987359       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0528 22:00:27.253247       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="144.543µs"
	E0528 22:00:55.407473       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:00:55.995661       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 22:01:25.412678       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:01:26.002841       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 22:01:55.419056       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:01:56.012178       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 22:02:25.424816       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:02:26.019977       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [cfb41c075cb48a9d458fcc2a85aca29b37f56931246adac5b1230661c528edcc] <==
	I0528 21:49:12.898688       1 server_linux.go:69] "Using iptables proxy"
	I0528 21:49:12.914685       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.79"]
	I0528 21:49:12.946940       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0528 21:49:12.946990       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0528 21:49:12.947043       1 server_linux.go:165] "Using iptables Proxier"
	I0528 21:49:12.949651       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0528 21:49:12.949883       1 server.go:872] "Version info" version="v1.30.1"
	I0528 21:49:12.949912       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 21:49:12.952726       1 config.go:192] "Starting service config controller"
	I0528 21:49:12.952762       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0528 21:49:12.952787       1 config.go:101] "Starting endpoint slice config controller"
	I0528 21:49:12.952792       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0528 21:49:12.960289       1 config.go:319] "Starting node config controller"
	I0528 21:49:12.960316       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0528 21:49:13.053906       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0528 21:49:13.053979       1 shared_informer.go:320] Caches are synced for service config
	I0528 21:49:13.060425       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [51b2876b1f3db864aa0e6c3000bdfb694e988ccb6c3ffdc56053e0547989c5e5] <==
	I0528 21:49:09.782373       1 serving.go:380] Generated self-signed cert in-memory
	W0528 21:49:12.083203       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0528 21:49:12.083343       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0528 21:49:12.083417       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0528 21:49:12.083461       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0528 21:49:12.211369       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0528 21:49:12.217851       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 21:49:12.227548       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0528 21:49:12.227752       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0528 21:49:12.228311       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0528 21:49:12.230772       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0528 21:49:12.328766       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 28 22:00:08 embed-certs-595279 kubelet[933]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 22:00:08 embed-certs-595279 kubelet[933]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 28 22:00:15 embed-certs-595279 kubelet[933]: E0528 22:00:15.230909     933 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-f6fz2" podUID="b5e432cd-3b95-4f20-b9b3-c498512a7564"
	May 28 22:00:27 embed-certs-595279 kubelet[933]: E0528 22:00:27.231111     933 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-f6fz2" podUID="b5e432cd-3b95-4f20-b9b3-c498512a7564"
	May 28 22:00:38 embed-certs-595279 kubelet[933]: E0528 22:00:38.232134     933 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-f6fz2" podUID="b5e432cd-3b95-4f20-b9b3-c498512a7564"
	May 28 22:00:51 embed-certs-595279 kubelet[933]: E0528 22:00:51.231118     933 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-f6fz2" podUID="b5e432cd-3b95-4f20-b9b3-c498512a7564"
	May 28 22:01:06 embed-certs-595279 kubelet[933]: E0528 22:01:06.231447     933 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-f6fz2" podUID="b5e432cd-3b95-4f20-b9b3-c498512a7564"
	May 28 22:01:08 embed-certs-595279 kubelet[933]: E0528 22:01:08.259226     933 iptables.go:577] "Could not set up iptables canary" err=<
	May 28 22:01:08 embed-certs-595279 kubelet[933]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 22:01:08 embed-certs-595279 kubelet[933]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 22:01:08 embed-certs-595279 kubelet[933]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 22:01:08 embed-certs-595279 kubelet[933]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 28 22:01:17 embed-certs-595279 kubelet[933]: E0528 22:01:17.232365     933 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-f6fz2" podUID="b5e432cd-3b95-4f20-b9b3-c498512a7564"
	May 28 22:01:28 embed-certs-595279 kubelet[933]: E0528 22:01:28.230723     933 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-f6fz2" podUID="b5e432cd-3b95-4f20-b9b3-c498512a7564"
	May 28 22:01:40 embed-certs-595279 kubelet[933]: E0528 22:01:40.235001     933 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-f6fz2" podUID="b5e432cd-3b95-4f20-b9b3-c498512a7564"
	May 28 22:01:53 embed-certs-595279 kubelet[933]: E0528 22:01:53.231039     933 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-f6fz2" podUID="b5e432cd-3b95-4f20-b9b3-c498512a7564"
	May 28 22:02:04 embed-certs-595279 kubelet[933]: E0528 22:02:04.233222     933 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-f6fz2" podUID="b5e432cd-3b95-4f20-b9b3-c498512a7564"
	May 28 22:02:08 embed-certs-595279 kubelet[933]: E0528 22:02:08.262481     933 iptables.go:577] "Could not set up iptables canary" err=<
	May 28 22:02:08 embed-certs-595279 kubelet[933]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 22:02:08 embed-certs-595279 kubelet[933]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 22:02:08 embed-certs-595279 kubelet[933]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 22:02:08 embed-certs-595279 kubelet[933]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 28 22:02:16 embed-certs-595279 kubelet[933]: E0528 22:02:16.231205     933 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-f6fz2" podUID="b5e432cd-3b95-4f20-b9b3-c498512a7564"
	May 28 22:02:27 embed-certs-595279 kubelet[933]: E0528 22:02:27.230854     933 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-f6fz2" podUID="b5e432cd-3b95-4f20-b9b3-c498512a7564"
	May 28 22:02:41 embed-certs-595279 kubelet[933]: E0528 22:02:41.231475     933 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-f6fz2" podUID="b5e432cd-3b95-4f20-b9b3-c498512a7564"
	
	
	==> storage-provisioner [9c5ee70d85c3e595c91ab0c4bcfdbd1f1161b2643af86fce85c056fc5d38482d] <==
	I0528 21:49:12.873498       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0528 21:49:42.877892       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [c6334a28f9d29a95b433857aede8e5afb904c6d2e764f1afa093ca5e7c09de09] <==
	I0528 21:49:43.516737       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0528 21:49:43.533120       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0528 21:49:43.533405       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0528 21:50:00.935356       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0528 21:50:00.936273       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"efd8c73c-2b15-4c73-812f-ad9c2ba03fd4", APIVersion:"v1", ResourceVersion:"675", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-595279_e605c4d9-51a6-4a1f-8323-20929da1efa1 became leader
	I0528 21:50:00.936496       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-595279_e605c4d9-51a6-4a1f-8323-20929da1efa1!
	I0528 21:50:01.041906       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-595279_e605c4d9-51a6-4a1f-8323-20929da1efa1!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-595279 -n embed-certs-595279
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-595279 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-f6fz2
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-595279 describe pod metrics-server-569cc877fc-f6fz2
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-595279 describe pod metrics-server-569cc877fc-f6fz2: exit status 1 (60.72605ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-f6fz2" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-595279 describe pod metrics-server-569cc877fc-f6fz2: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.26s)
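helpers_test.go (reference): the post-mortem checks above can be re-run by hand against the same profile while it still exists. This is only a sketch that repeats the commands already shown in this section; the profile name embed-certs-595279 and the pod name metrics-server-569cc877fc-f6fz2 are taken from the log above.

	# Check the API server status for the embed-certs profile (as the helpers do)
	out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-595279 -n embed-certs-595279
	# List any pods that are not Running, across all namespaces
	kubectl --context embed-certs-595279 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
	# Describe the non-running pod reported above (returns NotFound once the pod has been deleted)
	kubectl --context embed-certs-595279 describe pod metrics-server-569cc877fc-f6fz2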

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.47s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0528 21:55:55.686294   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kindnet-110727/client.crt: no such file or directory
E0528 21:56:08.806245   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/auto-110727/client.crt: no such file or directory
E0528 21:56:36.131792   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/calico-110727/client.crt: no such file or directory
E0528 21:56:55.337950   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/custom-flannel-110727/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-290122 -n no-preload-290122
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-05-28 22:03:55.57989852 +0000 UTC m=+6166.335977133
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-290122 -n no-preload-290122
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-290122 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-290122 logs -n 25: (1.383984618s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-110727                           | enable-default-cni-110727    | jenkins | v1.33.1 | 28 May 24 21:39 UTC | 28 May 24 21:39 UTC |
	|         | sudo systemctl status crio                             |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-110727                           | enable-default-cni-110727    | jenkins | v1.33.1 | 28 May 24 21:39 UTC | 28 May 24 21:39 UTC |
	|         | sudo systemctl cat crio                                |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-110727                           | enable-default-cni-110727    | jenkins | v1.33.1 | 28 May 24 21:39 UTC | 28 May 24 21:39 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |         |                     |                     |
	|         | \;                                                     |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-110727                           | enable-default-cni-110727    | jenkins | v1.33.1 | 28 May 24 21:39 UTC | 28 May 24 21:39 UTC |
	|         | sudo crio config                                       |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-110727                           | enable-default-cni-110727    | jenkins | v1.33.1 | 28 May 24 21:39 UTC | 28 May 24 21:39 UTC |
	| start   | -p embed-certs-595279                                  | embed-certs-595279           | jenkins | v1.33.1 | 28 May 24 21:39 UTC | 28 May 24 21:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-290122             | no-preload-290122            | jenkins | v1.33.1 | 28 May 24 21:41 UTC | 28 May 24 21:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-290122                                   | no-preload-290122            | jenkins | v1.33.1 | 28 May 24 21:41 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-595279            | embed-certs-595279           | jenkins | v1.33.1 | 28 May 24 21:41 UTC | 28 May 24 21:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-595279                                  | embed-certs-595279           | jenkins | v1.33.1 | 28 May 24 21:41 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-499466        | old-k8s-version-499466       | jenkins | v1.33.1 | 28 May 24 21:43 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-290122                  | no-preload-290122            | jenkins | v1.33.1 | 28 May 24 21:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-595279                 | embed-certs-595279           | jenkins | v1.33.1 | 28 May 24 21:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-290122                                   | no-preload-290122            | jenkins | v1.33.1 | 28 May 24 21:44 UTC | 28 May 24 21:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-595279                                  | embed-certs-595279           | jenkins | v1.33.1 | 28 May 24 21:44 UTC | 28 May 24 21:53 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-499466                              | old-k8s-version-499466       | jenkins | v1.33.1 | 28 May 24 21:45 UTC | 28 May 24 21:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-499466             | old-k8s-version-499466       | jenkins | v1.33.1 | 28 May 24 21:45 UTC | 28 May 24 21:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-499466                              | old-k8s-version-499466       | jenkins | v1.33.1 | 28 May 24 21:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-257793                              | cert-expiration-257793       | jenkins | v1.33.1 | 28 May 24 21:46 UTC | 28 May 24 21:46 UTC |
	| delete  | -p                                                     | disable-driver-mounts-807140 | jenkins | v1.33.1 | 28 May 24 21:46 UTC | 28 May 24 21:46 UTC |
	|         | disable-driver-mounts-807140                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-249165 | jenkins | v1.33.1 | 28 May 24 21:46 UTC | 28 May 24 21:50 UTC |
	|         | default-k8s-diff-port-249165                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-249165  | default-k8s-diff-port-249165 | jenkins | v1.33.1 | 28 May 24 21:51 UTC | 28 May 24 21:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-249165 | jenkins | v1.33.1 | 28 May 24 21:51 UTC |                     |
	|         | default-k8s-diff-port-249165                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-249165       | default-k8s-diff-port-249165 | jenkins | v1.33.1 | 28 May 24 21:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-249165 | jenkins | v1.33.1 | 28 May 24 21:53 UTC |                     |
	|         | default-k8s-diff-port-249165                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/28 21:53:40
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0528 21:53:40.744358   73188 out.go:291] Setting OutFile to fd 1 ...
	I0528 21:53:40.744653   73188 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:53:40.744664   73188 out.go:304] Setting ErrFile to fd 2...
	I0528 21:53:40.744668   73188 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:53:40.744923   73188 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
	I0528 21:53:40.745490   73188 out.go:298] Setting JSON to false
	I0528 21:53:40.746663   73188 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5764,"bootTime":1716927457,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0528 21:53:40.746723   73188 start.go:139] virtualization: kvm guest
	I0528 21:53:40.749013   73188 out.go:177] * [default-k8s-diff-port-249165] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0528 21:53:40.750611   73188 notify.go:220] Checking for updates...
	I0528 21:53:40.750618   73188 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 21:53:40.752116   73188 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 21:53:40.753384   73188 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 21:53:40.754612   73188 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 21:53:40.755846   73188 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0528 21:53:40.756972   73188 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 21:53:40.758627   73188 config.go:182] Loaded profile config "default-k8s-diff-port-249165": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 21:53:40.759050   73188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:53:40.759106   73188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:53:40.774337   73188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37895
	I0528 21:53:40.774754   73188 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:53:40.775318   73188 main.go:141] libmachine: Using API Version  1
	I0528 21:53:40.775344   73188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:53:40.775633   73188 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:53:40.775791   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 21:53:40.776007   73188 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 21:53:40.776327   73188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:53:40.776382   73188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:53:40.790531   73188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41603
	I0528 21:53:40.790970   73188 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:53:40.791471   73188 main.go:141] libmachine: Using API Version  1
	I0528 21:53:40.791498   73188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:53:40.791802   73188 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:53:40.791983   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 21:53:40.826633   73188 out.go:177] * Using the kvm2 driver based on existing profile
	I0528 21:53:40.827847   73188 start.go:297] selected driver: kvm2
	I0528 21:53:40.827863   73188 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-249165 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-249165 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.48 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:53:40.827981   73188 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0528 21:53:40.828705   73188 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 21:53:40.828777   73188 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18966-3963/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0528 21:53:40.844223   73188 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0528 21:53:40.844574   73188 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 21:53:40.844638   73188 cni.go:84] Creating CNI manager for ""
	I0528 21:53:40.844650   73188 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 21:53:40.844682   73188 start.go:340] cluster config:
	{Name:default-k8s-diff-port-249165 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-249165 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.48 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:53:40.844775   73188 iso.go:125] acquiring lock: {Name:mk0ef48bd19f4019c3ee87391d15b9afc1d5e8d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 21:53:40.846544   73188 out.go:177] * Starting "default-k8s-diff-port-249165" primary control-plane node in "default-k8s-diff-port-249165" cluster
	I0528 21:53:40.847754   73188 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 21:53:40.847792   73188 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0528 21:53:40.847801   73188 cache.go:56] Caching tarball of preloaded images
	I0528 21:53:40.847870   73188 preload.go:173] Found /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0528 21:53:40.847880   73188 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0528 21:53:40.847964   73188 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/default-k8s-diff-port-249165/config.json ...
	I0528 21:53:40.848196   73188 start.go:360] acquireMachinesLock for default-k8s-diff-port-249165: {Name:mk6fb3103b370c7f3d9923fd9de7afe2c77239d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0528 21:53:40.848256   73188 start.go:364] duration metric: took 38.994µs to acquireMachinesLock for "default-k8s-diff-port-249165"
	I0528 21:53:40.848271   73188 start.go:96] Skipping create...Using existing machine configuration
	I0528 21:53:40.848281   73188 fix.go:54] fixHost starting: 
	I0528 21:53:40.848534   73188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:53:40.848571   73188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:53:40.863227   73188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32843
	I0528 21:53:40.863708   73188 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:53:40.864162   73188 main.go:141] libmachine: Using API Version  1
	I0528 21:53:40.864182   73188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:53:40.864616   73188 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:53:40.864794   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 21:53:40.864952   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetState
	I0528 21:53:40.866583   73188 fix.go:112] recreateIfNeeded on default-k8s-diff-port-249165: state=Running err=<nil>
	W0528 21:53:40.866600   73188 fix.go:138] unexpected machine state, will restart: <nil>
	I0528 21:53:40.868382   73188 out.go:177] * Updating the running kvm2 "default-k8s-diff-port-249165" VM ...
	I0528 21:53:38.450836   70002 logs.go:123] Gathering logs for storage-provisioner [9c5ee70d85c3e595c91ab0c4bcfdbd1f1161b2643af86fce85c056fc5d38482d] ...
	I0528 21:53:38.450866   70002 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c5ee70d85c3e595c91ab0c4bcfdbd1f1161b2643af86fce85c056fc5d38482d"
	I0528 21:53:38.485575   70002 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:53:38.485610   70002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:53:38.854290   70002 logs.go:123] Gathering logs for container status ...
	I0528 21:53:38.854325   70002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:53:38.902357   70002 logs.go:123] Gathering logs for dmesg ...
	I0528 21:53:38.902389   70002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:53:38.916785   70002 logs.go:123] Gathering logs for etcd [3047accd150d979b115d9296d1b09bdad9f23c29d7c066800914fe3e6e001d3c] ...
	I0528 21:53:38.916820   70002 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3047accd150d979b115d9296d1b09bdad9f23c29d7c066800914fe3e6e001d3c"
	I0528 21:53:38.982119   70002 logs.go:123] Gathering logs for kube-apiserver [056fb79dac85895cd23ca69b8a238157e22babd5bf16ae416bb52d6e9b470622] ...
	I0528 21:53:38.982148   70002 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 056fb79dac85895cd23ca69b8a238157e22babd5bf16ae416bb52d6e9b470622"
	I0528 21:53:39.031038   70002 logs.go:123] Gathering logs for kube-proxy [cfb41c075cb48a9d458fcc2a85aca29b37f56931246adac5b1230661c528edcc] ...
	I0528 21:53:39.031066   70002 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cfb41c075cb48a9d458fcc2a85aca29b37f56931246adac5b1230661c528edcc"
	I0528 21:53:39.068094   70002 logs.go:123] Gathering logs for kube-controller-manager [b5366e4c2bcdabc25f55a71903840f5a97745fb1bac231cf9c73daaf2b9dab89] ...
	I0528 21:53:39.068123   70002 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5366e4c2bcdabc25f55a71903840f5a97745fb1bac231cf9c73daaf2b9dab89"
	I0528 21:53:39.129214   70002 logs.go:123] Gathering logs for kubelet ...
	I0528 21:53:39.129248   70002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:53:39.191483   70002 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:53:39.191523   70002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0528 21:53:41.813698   70002 system_pods.go:59] 8 kube-system pods found
	I0528 21:53:41.813725   70002 system_pods.go:61] "coredns-7db6d8ff4d-8cb7b" [b3908d89-cfc6-4f1a-9aef-861aac0d3e29] Running
	I0528 21:53:41.813730   70002 system_pods.go:61] "etcd-embed-certs-595279" [58581274-a239-4367-926e-c333f201d4f8] Running
	I0528 21:53:41.813733   70002 system_pods.go:61] "kube-apiserver-embed-certs-595279" [cc2dd164-709f-4e59-81bc-ce9d30bbced9] Running
	I0528 21:53:41.813736   70002 system_pods.go:61] "kube-controller-manager-embed-certs-595279" [e049af67-fff8-466f-96ff-81d148602884] Running
	I0528 21:53:41.813739   70002 system_pods.go:61] "kube-proxy-pnl5w" [9c2c68bc-42c2-425e-ae35-a8c07b5d5221] Running
	I0528 21:53:41.813742   70002 system_pods.go:61] "kube-scheduler-embed-certs-595279" [ab6bff2a-7266-4c5c-96bc-c87ef15dd342] Running
	I0528 21:53:41.813748   70002 system_pods.go:61] "metrics-server-569cc877fc-f6fz2" [b5e432cd-3b95-4f20-b9b3-c498512a7564] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0528 21:53:41.813751   70002 system_pods.go:61] "storage-provisioner" [7bf52279-1fbc-40e5-8376-992c545c55dd] Running
	I0528 21:53:41.813771   70002 system_pods.go:74] duration metric: took 3.894565784s to wait for pod list to return data ...
	I0528 21:53:41.813780   70002 default_sa.go:34] waiting for default service account to be created ...
	I0528 21:53:41.816297   70002 default_sa.go:45] found service account: "default"
	I0528 21:53:41.816319   70002 default_sa.go:55] duration metric: took 2.532587ms for default service account to be created ...
	I0528 21:53:41.816326   70002 system_pods.go:116] waiting for k8s-apps to be running ...
	I0528 21:53:41.821407   70002 system_pods.go:86] 8 kube-system pods found
	I0528 21:53:41.821437   70002 system_pods.go:89] "coredns-7db6d8ff4d-8cb7b" [b3908d89-cfc6-4f1a-9aef-861aac0d3e29] Running
	I0528 21:53:41.821447   70002 system_pods.go:89] "etcd-embed-certs-595279" [58581274-a239-4367-926e-c333f201d4f8] Running
	I0528 21:53:41.821453   70002 system_pods.go:89] "kube-apiserver-embed-certs-595279" [cc2dd164-709f-4e59-81bc-ce9d30bbced9] Running
	I0528 21:53:41.821458   70002 system_pods.go:89] "kube-controller-manager-embed-certs-595279" [e049af67-fff8-466f-96ff-81d148602884] Running
	I0528 21:53:41.821461   70002 system_pods.go:89] "kube-proxy-pnl5w" [9c2c68bc-42c2-425e-ae35-a8c07b5d5221] Running
	I0528 21:53:41.821465   70002 system_pods.go:89] "kube-scheduler-embed-certs-595279" [ab6bff2a-7266-4c5c-96bc-c87ef15dd342] Running
	I0528 21:53:41.821472   70002 system_pods.go:89] "metrics-server-569cc877fc-f6fz2" [b5e432cd-3b95-4f20-b9b3-c498512a7564] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0528 21:53:41.821480   70002 system_pods.go:89] "storage-provisioner" [7bf52279-1fbc-40e5-8376-992c545c55dd] Running
	I0528 21:53:41.821489   70002 system_pods.go:126] duration metric: took 5.157831ms to wait for k8s-apps to be running ...
	I0528 21:53:41.821498   70002 system_svc.go:44] waiting for kubelet service to be running ....
	I0528 21:53:41.821538   70002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 21:53:41.838819   70002 system_svc.go:56] duration metric: took 17.315204ms WaitForService to wait for kubelet
	I0528 21:53:41.838844   70002 kubeadm.go:576] duration metric: took 4m26.419891509s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 21:53:41.838864   70002 node_conditions.go:102] verifying NodePressure condition ...
	I0528 21:53:41.841408   70002 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 21:53:41.841424   70002 node_conditions.go:123] node cpu capacity is 2
	I0528 21:53:41.841433   70002 node_conditions.go:105] duration metric: took 2.56566ms to run NodePressure ...
	I0528 21:53:41.841445   70002 start.go:240] waiting for startup goroutines ...
	I0528 21:53:41.841452   70002 start.go:245] waiting for cluster config update ...
	I0528 21:53:41.841463   70002 start.go:254] writing updated cluster config ...
	I0528 21:53:41.841709   70002 ssh_runner.go:195] Run: rm -f paused
	I0528 21:53:41.886820   70002 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0528 21:53:41.888710   70002 out.go:177] * Done! kubectl is now configured to use "embed-certs-595279" cluster and "default" namespace by default
	I0528 21:53:40.749506   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:53:43.248909   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:53:40.869524   73188 machine.go:94] provisionDockerMachine start ...
	I0528 21:53:40.869542   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 21:53:40.869730   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 21:53:40.872099   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:53:40.872470   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:53:40.872491   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:53:40.872625   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHPort
	I0528 21:53:40.872772   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:53:40.872952   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:53:40.873092   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHUsername
	I0528 21:53:40.873253   73188 main.go:141] libmachine: Using SSH client type: native
	I0528 21:53:40.873429   73188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.48 22 <nil> <nil>}
	I0528 21:53:40.873438   73188 main.go:141] libmachine: About to run SSH command:
	hostname
	I0528 21:53:43.770029   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:53:45.748750   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:53:48.248904   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:53:46.841982   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:53:50.249442   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:53:52.749680   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:53:52.922023   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:53:55.251148   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:53:57.748960   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:53:55.994071   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:53:59.749114   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:02.248306   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:02.074025   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:05.145996   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:04.248616   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:06.749284   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:09.247806   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:11.249481   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:13.748196   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:12.825536   70393 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0528 21:54:12.825810   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:54:12.826159   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:54:14.266167   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:15.749468   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:18.248675   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:17.826706   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:54:17.826945   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:54:17.338025   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:20.248941   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:22.749284   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:23.417971   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:25.248681   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:27.748556   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:27.827370   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:54:27.827610   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:54:26.490049   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:29.748865   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:32.248746   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:32.569987   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:35.641969   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:34.249483   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:36.748835   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:38.749264   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:41.251039   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:43.248816   69886 pod_ready.go:81] duration metric: took 4m0.006582939s for pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace to be "Ready" ...
	E0528 21:54:43.248839   69886 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0528 21:54:43.248847   69886 pod_ready.go:38] duration metric: took 4m4.041932949s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 21:54:43.248863   69886 api_server.go:52] waiting for apiserver process to appear ...
	I0528 21:54:43.248889   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:54:43.248933   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:54:43.296609   69886 cri.go:89] found id: "42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc"
	I0528 21:54:43.296630   69886 cri.go:89] found id: ""
	I0528 21:54:43.296638   69886 logs.go:276] 1 containers: [42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc]
	I0528 21:54:43.296694   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:43.301171   69886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:54:43.301211   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:54:43.340772   69886 cri.go:89] found id: "48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e"
	I0528 21:54:43.340793   69886 cri.go:89] found id: ""
	I0528 21:54:43.340799   69886 logs.go:276] 1 containers: [48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e]
	I0528 21:54:43.340843   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:43.345422   69886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:54:43.345489   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:54:43.392432   69886 cri.go:89] found id: "ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b"
	I0528 21:54:43.392458   69886 cri.go:89] found id: ""
	I0528 21:54:43.392467   69886 logs.go:276] 1 containers: [ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b]
	I0528 21:54:43.392521   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:43.396870   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:54:43.396943   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:54:43.433491   69886 cri.go:89] found id: "e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af"
	I0528 21:54:43.433516   69886 cri.go:89] found id: ""
	I0528 21:54:43.433525   69886 logs.go:276] 1 containers: [e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af]
	I0528 21:54:43.433584   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:43.438209   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:54:43.438276   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:54:43.479257   69886 cri.go:89] found id: "9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910"
	I0528 21:54:43.479299   69886 cri.go:89] found id: ""
	I0528 21:54:43.479309   69886 logs.go:276] 1 containers: [9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910]
	I0528 21:54:43.479425   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:43.484063   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:54:43.484127   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:54:43.523360   69886 cri.go:89] found id: "e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a"
	I0528 21:54:43.523384   69886 cri.go:89] found id: ""
	I0528 21:54:43.523394   69886 logs.go:276] 1 containers: [e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a]
	I0528 21:54:43.523443   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:43.527859   69886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:54:43.527915   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:54:43.565610   69886 cri.go:89] found id: ""
	I0528 21:54:43.565631   69886 logs.go:276] 0 containers: []
	W0528 21:54:43.565638   69886 logs.go:278] No container was found matching "kindnet"
	I0528 21:54:43.565643   69886 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0528 21:54:43.565687   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0528 21:54:43.603133   69886 cri.go:89] found id: "6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba"
	I0528 21:54:43.603155   69886 cri.go:89] found id: "912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0"
	I0528 21:54:43.603159   69886 cri.go:89] found id: ""
	I0528 21:54:43.603166   69886 logs.go:276] 2 containers: [6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba 912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0]
	I0528 21:54:43.603233   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:43.607421   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:43.611570   69886 logs.go:123] Gathering logs for kube-apiserver [42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc] ...
	I0528 21:54:43.611593   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc"
	I0528 21:54:43.656455   69886 logs.go:123] Gathering logs for etcd [48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e] ...
	I0528 21:54:43.656483   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e"
	I0528 21:54:43.708385   69886 logs.go:123] Gathering logs for kube-controller-manager [e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a] ...
	I0528 21:54:43.708416   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a"
	I0528 21:54:43.766267   69886 logs.go:123] Gathering logs for storage-provisioner [6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba] ...
	I0528 21:54:43.766300   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba"
	I0528 21:54:43.813734   69886 logs.go:123] Gathering logs for kube-proxy [9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910] ...
	I0528 21:54:43.813782   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910"
	I0528 21:54:43.857289   69886 logs.go:123] Gathering logs for storage-provisioner [912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0] ...
	I0528 21:54:43.857317   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0"
	I0528 21:54:43.897976   69886 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:54:43.898001   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:54:41.721973   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:44.798063   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:44.394070   69886 logs.go:123] Gathering logs for kubelet ...
	I0528 21:54:44.394112   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:54:44.450041   69886 logs.go:123] Gathering logs for dmesg ...
	I0528 21:54:44.450078   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:54:44.464067   69886 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:54:44.464092   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0528 21:54:44.588402   69886 logs.go:123] Gathering logs for coredns [ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b] ...
	I0528 21:54:44.588432   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b"
	I0528 21:54:44.631477   69886 logs.go:123] Gathering logs for kube-scheduler [e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af] ...
	I0528 21:54:44.631505   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af"
	I0528 21:54:44.676531   69886 logs.go:123] Gathering logs for container status ...
	I0528 21:54:44.676562   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:54:47.229026   69886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:54:47.247014   69886 api_server.go:72] duration metric: took 4m15.746572678s to wait for apiserver process to appear ...
	I0528 21:54:47.247043   69886 api_server.go:88] waiting for apiserver healthz status ...
	I0528 21:54:47.247085   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:54:47.247153   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:54:47.291560   69886 cri.go:89] found id: "42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc"
	I0528 21:54:47.291592   69886 cri.go:89] found id: ""
	I0528 21:54:47.291602   69886 logs.go:276] 1 containers: [42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc]
	I0528 21:54:47.291667   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:47.296538   69886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:54:47.296597   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:54:47.335786   69886 cri.go:89] found id: "48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e"
	I0528 21:54:47.335809   69886 cri.go:89] found id: ""
	I0528 21:54:47.335817   69886 logs.go:276] 1 containers: [48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e]
	I0528 21:54:47.335861   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:47.340222   69886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:54:47.340295   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:54:47.376487   69886 cri.go:89] found id: "ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b"
	I0528 21:54:47.376518   69886 cri.go:89] found id: ""
	I0528 21:54:47.376528   69886 logs.go:276] 1 containers: [ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b]
	I0528 21:54:47.376587   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:47.380986   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:54:47.381043   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:54:47.419121   69886 cri.go:89] found id: "e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af"
	I0528 21:54:47.419144   69886 cri.go:89] found id: ""
	I0528 21:54:47.419151   69886 logs.go:276] 1 containers: [e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af]
	I0528 21:54:47.419194   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:47.423323   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:54:47.423378   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:54:47.460781   69886 cri.go:89] found id: "9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910"
	I0528 21:54:47.460806   69886 cri.go:89] found id: ""
	I0528 21:54:47.460813   69886 logs.go:276] 1 containers: [9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910]
	I0528 21:54:47.460856   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:47.465054   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:54:47.465107   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:54:47.510054   69886 cri.go:89] found id: "e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a"
	I0528 21:54:47.510077   69886 cri.go:89] found id: ""
	I0528 21:54:47.510085   69886 logs.go:276] 1 containers: [e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a]
	I0528 21:54:47.510136   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:47.514707   69886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:54:47.514764   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:54:47.551564   69886 cri.go:89] found id: ""
	I0528 21:54:47.551587   69886 logs.go:276] 0 containers: []
	W0528 21:54:47.551594   69886 logs.go:278] No container was found matching "kindnet"
	I0528 21:54:47.551600   69886 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0528 21:54:47.551647   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0528 21:54:47.591484   69886 cri.go:89] found id: "6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba"
	I0528 21:54:47.591506   69886 cri.go:89] found id: "912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0"
	I0528 21:54:47.591511   69886 cri.go:89] found id: ""
	I0528 21:54:47.591520   69886 logs.go:276] 2 containers: [6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba 912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0]
	I0528 21:54:47.591581   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:47.596620   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:47.600861   69886 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:54:47.600884   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:54:48.031181   69886 logs.go:123] Gathering logs for kubelet ...
	I0528 21:54:48.031218   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:54:48.085321   69886 logs.go:123] Gathering logs for kube-apiserver [42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc] ...
	I0528 21:54:48.085354   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc"
	I0528 21:54:48.135504   69886 logs.go:123] Gathering logs for coredns [ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b] ...
	I0528 21:54:48.135538   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b"
	I0528 21:54:48.172440   69886 logs.go:123] Gathering logs for kube-proxy [9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910] ...
	I0528 21:54:48.172474   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910"
	I0528 21:54:48.210817   69886 logs.go:123] Gathering logs for storage-provisioner [6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba] ...
	I0528 21:54:48.210849   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba"
	I0528 21:54:48.248170   69886 logs.go:123] Gathering logs for storage-provisioner [912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0] ...
	I0528 21:54:48.248196   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0"
	I0528 21:54:48.290905   69886 logs.go:123] Gathering logs for container status ...
	I0528 21:54:48.290933   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:54:48.344302   69886 logs.go:123] Gathering logs for dmesg ...
	I0528 21:54:48.344333   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:54:48.363912   69886 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:54:48.363940   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0528 21:54:48.490794   69886 logs.go:123] Gathering logs for etcd [48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e] ...
	I0528 21:54:48.490836   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e"
	I0528 21:54:48.538412   69886 logs.go:123] Gathering logs for kube-scheduler [e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af] ...
	I0528 21:54:48.538443   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af"
	I0528 21:54:48.574693   69886 logs.go:123] Gathering logs for kube-controller-manager [e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a] ...
	I0528 21:54:48.574724   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a"
	I0528 21:54:47.828383   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:54:47.828686   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:54:51.128492   69886 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0528 21:54:51.132736   69886 api_server.go:279] https://192.168.50.138:8443/healthz returned 200:
	ok
	I0528 21:54:51.133908   69886 api_server.go:141] control plane version: v1.30.1
	I0528 21:54:51.133927   69886 api_server.go:131] duration metric: took 3.886877047s to wait for apiserver health ...
	I0528 21:54:51.133935   69886 system_pods.go:43] waiting for kube-system pods to appear ...
	I0528 21:54:51.133953   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:54:51.134009   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:54:51.174021   69886 cri.go:89] found id: "42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc"
	I0528 21:54:51.174042   69886 cri.go:89] found id: ""
	I0528 21:54:51.174049   69886 logs.go:276] 1 containers: [42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc]
	I0528 21:54:51.174100   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:51.179416   69886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:54:51.179487   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:54:51.218954   69886 cri.go:89] found id: "48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e"
	I0528 21:54:51.218981   69886 cri.go:89] found id: ""
	I0528 21:54:51.218992   69886 logs.go:276] 1 containers: [48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e]
	I0528 21:54:51.219055   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:51.224849   69886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:54:51.224920   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:54:51.265274   69886 cri.go:89] found id: "ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b"
	I0528 21:54:51.265304   69886 cri.go:89] found id: ""
	I0528 21:54:51.265314   69886 logs.go:276] 1 containers: [ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b]
	I0528 21:54:51.265388   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:51.270027   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:54:51.270104   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:54:51.316234   69886 cri.go:89] found id: "e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af"
	I0528 21:54:51.316259   69886 cri.go:89] found id: ""
	I0528 21:54:51.316269   69886 logs.go:276] 1 containers: [e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af]
	I0528 21:54:51.316324   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:51.320705   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:54:51.320771   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:54:51.358054   69886 cri.go:89] found id: "9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910"
	I0528 21:54:51.358079   69886 cri.go:89] found id: ""
	I0528 21:54:51.358089   69886 logs.go:276] 1 containers: [9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910]
	I0528 21:54:51.358136   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:51.363687   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:54:51.363753   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:54:51.409441   69886 cri.go:89] found id: "e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a"
	I0528 21:54:51.409462   69886 cri.go:89] found id: ""
	I0528 21:54:51.409470   69886 logs.go:276] 1 containers: [e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a]
	I0528 21:54:51.409517   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:51.414069   69886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:54:51.414125   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:54:51.454212   69886 cri.go:89] found id: ""
	I0528 21:54:51.454245   69886 logs.go:276] 0 containers: []
	W0528 21:54:51.454255   69886 logs.go:278] No container was found matching "kindnet"
	I0528 21:54:51.454263   69886 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0528 21:54:51.454324   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0528 21:54:51.492146   69886 cri.go:89] found id: "6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba"
	I0528 21:54:51.492174   69886 cri.go:89] found id: "912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0"
	I0528 21:54:51.492181   69886 cri.go:89] found id: ""
	I0528 21:54:51.492190   69886 logs.go:276] 2 containers: [6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba 912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0]
	I0528 21:54:51.492262   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:51.497116   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:51.501448   69886 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:54:51.501469   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:54:51.871114   69886 logs.go:123] Gathering logs for container status ...
	I0528 21:54:51.871151   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:54:51.918562   69886 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:54:51.918590   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0528 21:54:52.031780   69886 logs.go:123] Gathering logs for kube-apiserver [42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc] ...
	I0528 21:54:52.031819   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc"
	I0528 21:54:52.090798   69886 logs.go:123] Gathering logs for kube-proxy [9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910] ...
	I0528 21:54:52.090827   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910"
	I0528 21:54:52.131645   69886 logs.go:123] Gathering logs for kube-controller-manager [e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a] ...
	I0528 21:54:52.131673   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a"
	I0528 21:54:52.191137   69886 logs.go:123] Gathering logs for storage-provisioner [6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba] ...
	I0528 21:54:52.191172   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba"
	I0528 21:54:52.241028   69886 logs.go:123] Gathering logs for storage-provisioner [912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0] ...
	I0528 21:54:52.241054   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0"
	I0528 21:54:52.276075   69886 logs.go:123] Gathering logs for kubelet ...
	I0528 21:54:52.276115   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:54:52.328268   69886 logs.go:123] Gathering logs for dmesg ...
	I0528 21:54:52.328307   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:54:52.342509   69886 logs.go:123] Gathering logs for etcd [48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e] ...
	I0528 21:54:52.342542   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e"
	I0528 21:54:52.390934   69886 logs.go:123] Gathering logs for coredns [ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b] ...
	I0528 21:54:52.390980   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b"
	I0528 21:54:52.429778   69886 logs.go:123] Gathering logs for kube-scheduler [e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af] ...
	I0528 21:54:52.429809   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af"
	I0528 21:54:54.975461   69886 system_pods.go:59] 8 kube-system pods found
	I0528 21:54:54.975495   69886 system_pods.go:61] "coredns-7db6d8ff4d-fmk2h" [a084dfb5-5818-4244-9052-a9f861b45617] Running
	I0528 21:54:54.975502   69886 system_pods.go:61] "etcd-no-preload-290122" [85b87ff0-50e4-4b23-b4dd-8442068b1af3] Running
	I0528 21:54:54.975508   69886 system_pods.go:61] "kube-apiserver-no-preload-290122" [2e2cecbc-1755-4cdc-8963-ab1e5b73e9f1] Running
	I0528 21:54:54.975514   69886 system_pods.go:61] "kube-controller-manager-no-preload-290122" [9a6218b4-fbd3-46ff-8eb5-a19f5ed9db72] Running
	I0528 21:54:54.975519   69886 system_pods.go:61] "kube-proxy-w45qh" [f962c73d-872d-4f78-a628-267cb0be49bb] Running
	I0528 21:54:54.975524   69886 system_pods.go:61] "kube-scheduler-no-preload-290122" [d191845d-d1ff-45d8-8601-b0395e2752f1] Running
	I0528 21:54:54.975532   69886 system_pods.go:61] "metrics-server-569cc877fc-j2khc" [2254e89c-3a61-4523-99a2-27ec92e73c9a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0528 21:54:54.975540   69886 system_pods.go:61] "storage-provisioner" [fc1a5463-05e0-4213-a7a8-2dd7f355ac36] Running
	I0528 21:54:54.975549   69886 system_pods.go:74] duration metric: took 3.841608486s to wait for pod list to return data ...
	I0528 21:54:54.975564   69886 default_sa.go:34] waiting for default service account to be created ...
	I0528 21:54:54.977757   69886 default_sa.go:45] found service account: "default"
	I0528 21:54:54.977794   69886 default_sa.go:55] duration metric: took 2.222664ms for default service account to be created ...
	I0528 21:54:54.977803   69886 system_pods.go:116] waiting for k8s-apps to be running ...
	I0528 21:54:54.982505   69886 system_pods.go:86] 8 kube-system pods found
	I0528 21:54:54.982527   69886 system_pods.go:89] "coredns-7db6d8ff4d-fmk2h" [a084dfb5-5818-4244-9052-a9f861b45617] Running
	I0528 21:54:54.982532   69886 system_pods.go:89] "etcd-no-preload-290122" [85b87ff0-50e4-4b23-b4dd-8442068b1af3] Running
	I0528 21:54:54.982537   69886 system_pods.go:89] "kube-apiserver-no-preload-290122" [2e2cecbc-1755-4cdc-8963-ab1e5b73e9f1] Running
	I0528 21:54:54.982541   69886 system_pods.go:89] "kube-controller-manager-no-preload-290122" [9a6218b4-fbd3-46ff-8eb5-a19f5ed9db72] Running
	I0528 21:54:54.982545   69886 system_pods.go:89] "kube-proxy-w45qh" [f962c73d-872d-4f78-a628-267cb0be49bb] Running
	I0528 21:54:54.982549   69886 system_pods.go:89] "kube-scheduler-no-preload-290122" [d191845d-d1ff-45d8-8601-b0395e2752f1] Running
	I0528 21:54:54.982554   69886 system_pods.go:89] "metrics-server-569cc877fc-j2khc" [2254e89c-3a61-4523-99a2-27ec92e73c9a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0528 21:54:54.982559   69886 system_pods.go:89] "storage-provisioner" [fc1a5463-05e0-4213-a7a8-2dd7f355ac36] Running
	I0528 21:54:54.982565   69886 system_pods.go:126] duration metric: took 4.757682ms to wait for k8s-apps to be running ...
	I0528 21:54:54.982571   69886 system_svc.go:44] waiting for kubelet service to be running ....
	I0528 21:54:54.982611   69886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 21:54:54.998318   69886 system_svc.go:56] duration metric: took 15.73926ms WaitForService to wait for kubelet
	I0528 21:54:54.998344   69886 kubeadm.go:576] duration metric: took 4m23.497907193s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 21:54:54.998364   69886 node_conditions.go:102] verifying NodePressure condition ...
	I0528 21:54:55.000709   69886 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 21:54:55.000726   69886 node_conditions.go:123] node cpu capacity is 2
	I0528 21:54:55.000737   69886 node_conditions.go:105] duration metric: took 2.368195ms to run NodePressure ...
	I0528 21:54:55.000747   69886 start.go:240] waiting for startup goroutines ...
	I0528 21:54:55.000754   69886 start.go:245] waiting for cluster config update ...
	I0528 21:54:55.000767   69886 start.go:254] writing updated cluster config ...
	I0528 21:54:55.001043   69886 ssh_runner.go:195] Run: rm -f paused
	I0528 21:54:55.049907   69886 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0528 21:54:55.051941   69886 out.go:177] * Done! kubectl is now configured to use "no-preload-290122" cluster and "default" namespace by default
	I0528 21:54:50.874003   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:53.946104   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:00.029992   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:03.098014   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:09.177976   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:12.250035   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:18.330105   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:21.402027   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:27.830110   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:55:27.830377   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:55:27.830409   70393 kubeadm.go:309] 
	I0528 21:55:27.830460   70393 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0528 21:55:27.830496   70393 kubeadm.go:309] 		timed out waiting for the condition
	I0528 21:55:27.830504   70393 kubeadm.go:309] 
	I0528 21:55:27.830563   70393 kubeadm.go:309] 	This error is likely caused by:
	I0528 21:55:27.830629   70393 kubeadm.go:309] 		- The kubelet is not running
	I0528 21:55:27.830806   70393 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0528 21:55:27.830833   70393 kubeadm.go:309] 
	I0528 21:55:27.830939   70393 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0528 21:55:27.830970   70393 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0528 21:55:27.830999   70393 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0528 21:55:27.831006   70393 kubeadm.go:309] 
	I0528 21:55:27.831089   70393 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0528 21:55:27.831161   70393 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0528 21:55:27.831168   70393 kubeadm.go:309] 
	I0528 21:55:27.831276   70393 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0528 21:55:27.831396   70393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0528 21:55:27.831491   70393 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0528 21:55:27.831586   70393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0528 21:55:27.831597   70393 kubeadm.go:309] 
	I0528 21:55:27.832385   70393 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0528 21:55:27.832478   70393 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0528 21:55:27.832569   70393 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0528 21:55:27.832707   70393 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0528 21:55:27.832768   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0528 21:55:28.286592   70393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 21:55:28.301095   70393 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0528 21:55:28.310856   70393 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0528 21:55:28.310875   70393 kubeadm.go:156] found existing configuration files:
	
	I0528 21:55:28.310916   70393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0528 21:55:28.319713   70393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0528 21:55:28.319757   70393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0528 21:55:28.328964   70393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0528 21:55:28.337404   70393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0528 21:55:28.337456   70393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0528 21:55:28.346480   70393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0528 21:55:28.355427   70393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0528 21:55:28.355475   70393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0528 21:55:28.364843   70393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0528 21:55:28.373821   70393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0528 21:55:28.373874   70393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
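The lines above show the stale-config check that runs before the second kubeadm init attempt: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the file is missing or does not reference it. A minimal shell sketch of that loop, reconstructed from the logged grep/rm pairs rather than taken from minikube's Go source, would be:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # keep the file only if it already points at the expected endpoint
	  sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
	    || sudo rm -f /etc/kubernetes/$f
	done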
	I0528 21:55:28.382542   70393 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0528 21:55:28.448539   70393 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0528 21:55:28.448744   70393 kubeadm.go:309] [preflight] Running pre-flight checks
	I0528 21:55:28.592911   70393 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0528 21:55:28.593029   70393 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0528 21:55:28.593137   70393 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0528 21:55:28.793805   70393 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0528 21:55:28.795709   70393 out.go:204]   - Generating certificates and keys ...
	I0528 21:55:28.795786   70393 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0528 21:55:28.795854   70393 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0528 21:55:28.795959   70393 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0528 21:55:28.796055   70393 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0528 21:55:28.796153   70393 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0528 21:55:28.796349   70393 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0528 21:55:28.796467   70393 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0528 21:55:28.796537   70393 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0528 21:55:28.796610   70393 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0528 21:55:28.796721   70393 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0528 21:55:28.796768   70393 kubeadm.go:309] [certs] Using the existing "sa" key
	I0528 21:55:28.796847   70393 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0528 21:55:28.946885   70393 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0528 21:55:29.128640   70393 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0528 21:55:29.240490   70393 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0528 21:55:29.542128   70393 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0528 21:55:29.563784   70393 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0528 21:55:29.565927   70393 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0528 21:55:29.566159   70393 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0528 21:55:29.711517   70393 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0528 21:55:27.482003   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:30.554006   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:29.713311   70393 out.go:204]   - Booting up control plane ...
	I0528 21:55:29.713420   70393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0528 21:55:29.717970   70393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0528 21:55:29.718779   70393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0528 21:55:29.719429   70393 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0528 21:55:29.722781   70393 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0528 21:55:36.633958   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:39.710041   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:45.785968   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:48.861975   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:54.938007   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:58.014038   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:04.094039   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:07.162043   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:09.724902   70393 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0528 21:56:09.725334   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:56:09.725557   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:56:13.241997   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:14.726408   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:56:14.726667   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:56:16.314032   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:22.394150   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:25.465982   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:24.727314   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:56:24.727592   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:56:31.546004   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:34.617980   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:40.697993   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:43.770044   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:44.728635   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:56:44.728954   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:56:49.853977   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:52.922083   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:59.001998   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:02.073983   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:08.157974   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:11.226001   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:17.305964   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:20.377963   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:24.729385   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:57:24.729659   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:57:24.729688   70393 kubeadm.go:309] 
	I0528 21:57:24.729745   70393 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0528 21:57:24.729835   70393 kubeadm.go:309] 		timed out waiting for the condition
	I0528 21:57:24.729856   70393 kubeadm.go:309] 
	I0528 21:57:24.729898   70393 kubeadm.go:309] 	This error is likely caused by:
	I0528 21:57:24.729930   70393 kubeadm.go:309] 		- The kubelet is not running
	I0528 21:57:24.730023   70393 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0528 21:57:24.730030   70393 kubeadm.go:309] 
	I0528 21:57:24.730156   70393 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0528 21:57:24.730212   70393 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0528 21:57:24.730267   70393 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0528 21:57:24.730278   70393 kubeadm.go:309] 
	I0528 21:57:24.730403   70393 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0528 21:57:24.730522   70393 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0528 21:57:24.730533   70393 kubeadm.go:309] 
	I0528 21:57:24.730669   70393 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0528 21:57:24.730788   70393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0528 21:57:24.730899   70393 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0528 21:57:24.731020   70393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0528 21:57:24.731039   70393 kubeadm.go:309] 
	I0528 21:57:24.731657   70393 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0528 21:57:24.731752   70393 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0528 21:57:24.731861   70393 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0528 21:57:24.731942   70393 kubeadm.go:393] duration metric: took 7m57.905523124s to StartCluster
	I0528 21:57:24.731997   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:57:24.732064   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:57:24.772889   70393 cri.go:89] found id: ""
	I0528 21:57:24.772916   70393 logs.go:276] 0 containers: []
	W0528 21:57:24.772923   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:57:24.772929   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:57:24.772988   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:57:24.806418   70393 cri.go:89] found id: ""
	I0528 21:57:24.806447   70393 logs.go:276] 0 containers: []
	W0528 21:57:24.806458   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:57:24.806467   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:57:24.806534   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:57:24.844994   70393 cri.go:89] found id: ""
	I0528 21:57:24.845020   70393 logs.go:276] 0 containers: []
	W0528 21:57:24.845028   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:57:24.845035   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:57:24.845098   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:57:24.880517   70393 cri.go:89] found id: ""
	I0528 21:57:24.880547   70393 logs.go:276] 0 containers: []
	W0528 21:57:24.880558   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:57:24.880566   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:57:24.880615   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:57:24.917534   70393 cri.go:89] found id: ""
	I0528 21:57:24.917561   70393 logs.go:276] 0 containers: []
	W0528 21:57:24.917569   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:57:24.917575   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:57:24.917624   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:57:24.952898   70393 cri.go:89] found id: ""
	I0528 21:57:24.952929   70393 logs.go:276] 0 containers: []
	W0528 21:57:24.952940   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:57:24.952948   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:57:24.953011   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:57:24.994957   70393 cri.go:89] found id: ""
	I0528 21:57:24.994983   70393 logs.go:276] 0 containers: []
	W0528 21:57:24.994990   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:57:24.994996   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:57:24.995046   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:57:25.032594   70393 cri.go:89] found id: ""
	I0528 21:57:25.032617   70393 logs.go:276] 0 containers: []
	W0528 21:57:25.032624   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:57:25.032633   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:57:25.032645   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:57:25.112858   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:57:25.112882   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:57:25.112894   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:57:25.217748   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:57:25.217792   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:57:25.289998   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:57:25.290035   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:57:25.344833   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:57:25.344868   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0528 21:57:25.360547   70393 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0528 21:57:25.360594   70393 out.go:239] * 
	W0528 21:57:25.360659   70393 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0528 21:57:25.360693   70393 out.go:239] * 
	W0528 21:57:25.361545   70393 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0528 21:57:25.365387   70393 out.go:177] 
	W0528 21:57:25.366681   70393 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0528 21:57:25.366731   70393 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0528 21:57:25.366772   70393 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0528 21:57:25.369011   70393 out.go:177] 
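The run exits here with K8S_KUBELET_NOT_RUNNING after the kubelet never answered on http://localhost:10248/healthz. Collecting the troubleshooting steps that the kubeadm output and the suggestion above already name, a manual follow-up would look roughly like this (commands are taken from the log itself; the profile name is a placeholder, not part of the original output):

	# on the node, e.g. via 'minikube ssh':
	systemctl status kubelet
	journalctl -xeu kubelet
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	# from the host, retry with the suggested kubelet cgroup driver:
	minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd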
	I0528 21:57:26.462093   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:29.530040   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:35.610027   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:38.682076   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:44.762057   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:47.838109   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:53.914000   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:56.986078   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:58:03.066042   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:58:06.138002   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:58:12.218031   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:58:15.290043   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:58:18.290952   73188 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 21:58:18.291006   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetMachineName
	I0528 21:58:18.291338   73188 buildroot.go:166] provisioning hostname "default-k8s-diff-port-249165"
	I0528 21:58:18.291363   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetMachineName
	I0528 21:58:18.291646   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 21:58:18.293181   73188 machine.go:97] duration metric: took 4m37.423637232s to provisionDockerMachine
	I0528 21:58:18.293224   73188 fix.go:56] duration metric: took 4m37.444947597s for fixHost
	I0528 21:58:18.293230   73188 start.go:83] releasing machines lock for "default-k8s-diff-port-249165", held for 4m37.444964638s
	W0528 21:58:18.293245   73188 start.go:713] error starting host: provision: host is not running
	W0528 21:58:18.293337   73188 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0528 21:58:18.293346   73188 start.go:728] Will try again in 5 seconds ...
	I0528 21:58:23.295554   73188 start.go:360] acquireMachinesLock for default-k8s-diff-port-249165: {Name:mk6fb3103b370c7f3d9923fd9de7afe2c77239d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0528 21:58:23.295664   73188 start.go:364] duration metric: took 68.737µs to acquireMachinesLock for "default-k8s-diff-port-249165"
	I0528 21:58:23.295686   73188 start.go:96] Skipping create...Using existing machine configuration
	I0528 21:58:23.295692   73188 fix.go:54] fixHost starting: 
	I0528 21:58:23.296036   73188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:58:23.296059   73188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:58:23.310971   73188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33471
	I0528 21:58:23.311354   73188 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:58:23.311769   73188 main.go:141] libmachine: Using API Version  1
	I0528 21:58:23.311791   73188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:58:23.312072   73188 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:58:23.312279   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 21:58:23.312406   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetState
	I0528 21:58:23.313815   73188 fix.go:112] recreateIfNeeded on default-k8s-diff-port-249165: state=Stopped err=<nil>
	I0528 21:58:23.313837   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	W0528 21:58:23.313981   73188 fix.go:138] unexpected machine state, will restart: <nil>
	I0528 21:58:23.315867   73188 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-249165" ...
	I0528 21:58:23.317068   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .Start
	I0528 21:58:23.317224   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Ensuring networks are active...
	I0528 21:58:23.317939   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Ensuring network default is active
	I0528 21:58:23.318317   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Ensuring network mk-default-k8s-diff-port-249165 is active
	I0528 21:58:23.318787   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Getting domain xml...
	I0528 21:58:23.319512   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Creating domain...
	I0528 21:58:24.556897   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting to get IP...
	I0528 21:58:24.557688   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:24.558217   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:24.558288   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:24.558188   74350 retry.go:31] will retry after 274.96624ms: waiting for machine to come up
	I0528 21:58:24.834950   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:24.835591   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:24.835621   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:24.835547   74350 retry.go:31] will retry after 271.693151ms: waiting for machine to come up
	I0528 21:58:25.109193   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:25.109736   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:25.109782   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:25.109675   74350 retry.go:31] will retry after 381.434148ms: waiting for machine to come up
	I0528 21:58:25.493383   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:25.493853   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:25.493880   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:25.493784   74350 retry.go:31] will retry after 384.034489ms: waiting for machine to come up
	I0528 21:58:25.879289   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:25.879822   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:25.879854   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:25.879749   74350 retry.go:31] will retry after 517.483073ms: waiting for machine to come up
	I0528 21:58:26.398450   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:26.399012   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:26.399089   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:26.399010   74350 retry.go:31] will retry after 757.371702ms: waiting for machine to come up
	I0528 21:58:27.157490   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:27.158014   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:27.158044   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:27.157971   74350 retry.go:31] will retry after 1.042611523s: waiting for machine to come up
	I0528 21:58:28.201704   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:28.202196   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:28.202229   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:28.202140   74350 retry.go:31] will retry after 1.287212665s: waiting for machine to come up
	I0528 21:58:29.490908   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:29.491356   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:29.491386   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:29.491287   74350 retry.go:31] will retry after 1.576442022s: waiting for machine to come up
	I0528 21:58:31.069493   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:31.069966   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:31.069995   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:31.069917   74350 retry.go:31] will retry after 2.245383669s: waiting for machine to come up
	I0528 21:58:33.317217   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:33.317670   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:33.317701   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:33.317608   74350 retry.go:31] will retry after 2.415705908s: waiting for machine to come up
	I0528 21:58:35.736148   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:35.736526   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:35.736549   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:35.736486   74350 retry.go:31] will retry after 3.463330934s: waiting for machine to come up
	I0528 21:58:39.201369   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:39.201852   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:39.201885   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:39.201819   74350 retry.go:31] will retry after 4.496481714s: waiting for machine to come up
	I0528 21:58:43.699313   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:43.699760   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Found IP for machine: 192.168.72.48
	I0528 21:58:43.699783   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Reserving static IP address...
	I0528 21:58:43.699801   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has current primary IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:43.700262   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Reserved static IP address: 192.168.72.48
	I0528 21:58:43.700280   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for SSH to be available...
	I0528 21:58:43.700295   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-249165", mac: "52:54:00:f4:fc:a4", ip: "192.168.72.48"} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:43.700339   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | skip adding static IP to network mk-default-k8s-diff-port-249165 - found existing host DHCP lease matching {name: "default-k8s-diff-port-249165", mac: "52:54:00:f4:fc:a4", ip: "192.168.72.48"}
	I0528 21:58:43.700362   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | Getting to WaitForSSH function...
	I0528 21:58:43.702496   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:43.702910   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:43.702941   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:43.703104   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | Using SSH client type: external
	I0528 21:58:43.703126   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | Using SSH private key: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/default-k8s-diff-port-249165/id_rsa (-rw-------)
	I0528 21:58:43.703169   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.48 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18966-3963/.minikube/machines/default-k8s-diff-port-249165/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0528 21:58:43.703185   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | About to run SSH command:
	I0528 21:58:43.703211   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | exit 0
	I0528 21:58:43.825921   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | SSH cmd err, output: <nil>: 
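
For reference, the WaitForSSH step that finishes here simply shells out to the system ssh binary with the options logged at 21:58:43.703169 and keeps running "exit 0" until the guest accepts the connection. A rough Go sketch of such a probe loop, using a subset of those options; the helper name, retry count and sleep interval are illustrative and not the libmachine implementation:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForSSH runs "exit 0" over the external ssh client until the host
    // accepts the connection, mirroring the ssh options shown in the log.
    func waitForSSH(ip, keyPath string) error {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            "docker@" + ip,
            "exit 0",
        }
        for attempt := 1; attempt <= 10; attempt++ {
            if err := exec.Command("/usr/bin/ssh", args...).Run(); err == nil {
                return nil // SSH answered; the machine is reachable
            }
            time.Sleep(5 * time.Second)
        }
        return fmt.Errorf("ssh to %s never became available", ip)
    }

    func main() {
        key := "/home/jenkins/minikube-integration/18966-3963/.minikube/machines/default-k8s-diff-port-249165/id_rsa"
        if err := waitForSSH("192.168.72.48", key); err != nil {
            fmt.Println(err)
        }
    }
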
	I0528 21:58:43.826314   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetConfigRaw
	I0528 21:58:43.826989   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetIP
	I0528 21:58:43.829337   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:43.829663   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:43.829685   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:43.829993   73188 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/default-k8s-diff-port-249165/config.json ...
	I0528 21:58:43.830227   73188 machine.go:94] provisionDockerMachine start ...
	I0528 21:58:43.830259   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 21:58:43.830499   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 21:58:43.832840   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:43.833193   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:43.833222   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:43.833382   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHPort
	I0528 21:58:43.833551   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:43.833687   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:43.833820   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHUsername
	I0528 21:58:43.833977   73188 main.go:141] libmachine: Using SSH client type: native
	I0528 21:58:43.834147   73188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.48 22 <nil> <nil>}
	I0528 21:58:43.834156   73188 main.go:141] libmachine: About to run SSH command:
	hostname
	I0528 21:58:43.938159   73188 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0528 21:58:43.938191   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetMachineName
	I0528 21:58:43.938426   73188 buildroot.go:166] provisioning hostname "default-k8s-diff-port-249165"
	I0528 21:58:43.938472   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetMachineName
	I0528 21:58:43.938684   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 21:58:43.941594   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:43.941986   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:43.942016   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:43.942195   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHPort
	I0528 21:58:43.942393   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:43.942550   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:43.942742   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHUsername
	I0528 21:58:43.942913   73188 main.go:141] libmachine: Using SSH client type: native
	I0528 21:58:43.943069   73188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.48 22 <nil> <nil>}
	I0528 21:58:43.943082   73188 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-249165 && echo "default-k8s-diff-port-249165" | sudo tee /etc/hostname
	I0528 21:58:44.060923   73188 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-249165
	
	I0528 21:58:44.060955   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 21:58:44.063621   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:44.063974   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:44.064008   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:44.064132   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHPort
	I0528 21:58:44.064326   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:44.064508   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:44.064660   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHUsername
	I0528 21:58:44.064818   73188 main.go:141] libmachine: Using SSH client type: native
	I0528 21:58:44.064999   73188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.48 22 <nil> <nil>}
	I0528 21:58:44.065016   73188 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-249165' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-249165/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-249165' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 21:58:44.174464   73188 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 21:58:44.174491   73188 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18966-3963/.minikube CaCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18966-3963/.minikube}
	I0528 21:58:44.174524   73188 buildroot.go:174] setting up certificates
	I0528 21:58:44.174538   73188 provision.go:84] configureAuth start
	I0528 21:58:44.174549   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetMachineName
	I0528 21:58:44.174838   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetIP
	I0528 21:58:44.177623   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:44.178024   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:44.178052   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:44.178250   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 21:58:44.180956   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:44.181305   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:44.181334   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:44.181500   73188 provision.go:143] copyHostCerts
	I0528 21:58:44.181571   73188 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem, removing ...
	I0528 21:58:44.181582   73188 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem
	I0528 21:58:44.181643   73188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem (1078 bytes)
	I0528 21:58:44.181753   73188 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem, removing ...
	I0528 21:58:44.181787   73188 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem
	I0528 21:58:44.181819   73188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem (1123 bytes)
	I0528 21:58:44.181892   73188 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem, removing ...
	I0528 21:58:44.181899   73188 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem
	I0528 21:58:44.181920   73188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem (1675 bytes)
	I0528 21:58:44.181984   73188 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-249165 san=[127.0.0.1 192.168.72.48 default-k8s-diff-port-249165 localhost minikube]
	I0528 21:58:44.490074   73188 provision.go:177] copyRemoteCerts
	I0528 21:58:44.490127   73188 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 21:58:44.490150   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 21:58:44.492735   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:44.493121   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:44.493156   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:44.493306   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHPort
	I0528 21:58:44.493526   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:44.493690   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHUsername
	I0528 21:58:44.493845   73188 sshutil.go:53] new ssh client: &{IP:192.168.72.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/default-k8s-diff-port-249165/id_rsa Username:docker}
	I0528 21:58:44.575620   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0528 21:58:44.601185   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0528 21:58:44.625266   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0528 21:58:44.648243   73188 provision.go:87] duration metric: took 473.69068ms to configureAuth
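
The server certificate generated at 21:58:44.181984 has to carry the SANs listed there (127.0.0.1, 192.168.72.48, default-k8s-diff-port-249165, localhost, minikube), otherwise later TLS connections to the machine fail host verification. A small standalone Go helper for inspecting the SANs of the copied /etc/docker/server.pem; this is a hypothetical inspection tool, not part of minikube:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // Read the server certificate that provision.go copied to the guest.
        data, err := os.ReadFile("/etc/docker/server.pem")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Print the subject alternative names baked into the certificate.
        fmt.Println("DNS SANs:", cert.DNSNames)
        fmt.Println("IP SANs: ", cert.IPAddresses)
    }
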
	I0528 21:58:44.648271   73188 buildroot.go:189] setting minikube options for container-runtime
	I0528 21:58:44.648430   73188 config.go:182] Loaded profile config "default-k8s-diff-port-249165": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 21:58:44.648502   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 21:58:44.651430   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:44.651793   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:44.651820   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:44.651960   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHPort
	I0528 21:58:44.652140   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:44.652277   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:44.652436   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHUsername
	I0528 21:58:44.652592   73188 main.go:141] libmachine: Using SSH client type: native
	I0528 21:58:44.652762   73188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.48 22 <nil> <nil>}
	I0528 21:58:44.652777   73188 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0528 21:58:44.923577   73188 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0528 21:58:44.923597   73188 machine.go:97] duration metric: took 1.093358522s to provisionDockerMachine
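
A note on the garbled-looking verbs in several commands above and below: tokens such as %!s(MISSING), %!N(MISSING), %!p(MISSING) and the %!"(MISSING) eviction thresholds in the kubeadm config further down are produced by Go's fmt package when a command string containing literal shell verbs (printf %s, date +%s.%N, find -printf %p, "0%") is logged through a Printf-style call without matching arguments; the guest receives the intact command. A minimal Go reproduction of the effect, with the command string shortened for clarity (this is not the actual call site in main.go):

    package main

    import "fmt"

    func main() {
        // The remote command template contains a literal shell printf verb.
        cmd := `sudo mkdir -p /etc/sysconfig && printf %s "..." | sudo tee /etc/sysconfig/crio.minikube`

        // Formatting it without arguments makes fmt annotate the unmatched
        // verb, which is exactly what the log lines above show.
        fmt.Println(fmt.Sprintf(cmd))
        // Output:
        // sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "..." | sudo tee /etc/sysconfig/crio.minikube
    }
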
	I0528 21:58:44.923607   73188 start.go:293] postStartSetup for "default-k8s-diff-port-249165" (driver="kvm2")
	I0528 21:58:44.923618   73188 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0528 21:58:44.923649   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 21:58:44.924030   73188 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0528 21:58:44.924124   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 21:58:44.926704   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:44.927009   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:44.927038   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:44.927162   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHPort
	I0528 21:58:44.927347   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:44.927491   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHUsername
	I0528 21:58:44.927627   73188 sshutil.go:53] new ssh client: &{IP:192.168.72.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/default-k8s-diff-port-249165/id_rsa Username:docker}
	I0528 21:58:45.009429   73188 ssh_runner.go:195] Run: cat /etc/os-release
	I0528 21:58:45.014007   73188 info.go:137] Remote host: Buildroot 2023.02.9
	I0528 21:58:45.014032   73188 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/addons for local assets ...
	I0528 21:58:45.014094   73188 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/files for local assets ...
	I0528 21:58:45.014161   73188 filesync.go:149] local asset: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem -> 117602.pem in /etc/ssl/certs
	I0528 21:58:45.014265   73188 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0528 21:58:45.024039   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem --> /etc/ssl/certs/117602.pem (1708 bytes)
	I0528 21:58:45.050461   73188 start.go:296] duration metric: took 126.842658ms for postStartSetup
	I0528 21:58:45.050497   73188 fix.go:56] duration metric: took 21.754803931s for fixHost
	I0528 21:58:45.050519   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 21:58:45.053312   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:45.053639   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:45.053671   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:45.053821   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHPort
	I0528 21:58:45.054025   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:45.054198   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:45.054339   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHUsername
	I0528 21:58:45.054475   73188 main.go:141] libmachine: Using SSH client type: native
	I0528 21:58:45.054646   73188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.48 22 <nil> <nil>}
	I0528 21:58:45.054657   73188 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0528 21:58:45.159430   73188 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716933525.136417037
	
	I0528 21:58:45.159460   73188 fix.go:216] guest clock: 1716933525.136417037
	I0528 21:58:45.159470   73188 fix.go:229] Guest: 2024-05-28 21:58:45.136417037 +0000 UTC Remote: 2024-05-28 21:58:45.05050169 +0000 UTC m=+304.341994853 (delta=85.915347ms)
	I0528 21:58:45.159495   73188 fix.go:200] guest clock delta is within tolerance: 85.915347ms
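
The guest-clock check above reduces to subtracting the host timestamp from the value the guest returned for "date +%s.%N" and accepting the machine if the absolute difference stays below the drift tolerance. A hedged Go sketch of that arithmetic using the exact values from this run; the one-second tolerance here is a placeholder, the real threshold is whatever fix.go applies:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Values from the log: the guest reported 1716933525.136417037,
        // the host clock read 2024-05-28 21:58:45.05050169 UTC on return.
        guest := time.Unix(1716933525, 136417037)
        remote := time.Date(2024, 5, 28, 21, 58, 45, 50501690, time.UTC)

        delta := guest.Sub(remote)
        fmt.Println("clock delta:", delta) // ~85.915347ms, matching fix.go:229

        // Placeholder tolerance; minikube compares the delta against its own
        // configured threshold before deciding whether to resync the clock.
        const tolerance = time.Second
        fmt.Println("within tolerance:", delta > -tolerance && delta < tolerance)
    }
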
	I0528 21:58:45.159502   73188 start.go:83] releasing machines lock for "default-k8s-diff-port-249165", held for 21.863825672s
	I0528 21:58:45.159552   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 21:58:45.159830   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetIP
	I0528 21:58:45.162709   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:45.163053   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:45.163089   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:45.163264   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 21:58:45.163717   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 21:58:45.163931   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 21:58:45.164028   73188 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0528 21:58:45.164072   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 21:58:45.164139   73188 ssh_runner.go:195] Run: cat /version.json
	I0528 21:58:45.164164   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 21:58:45.167063   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:45.167215   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:45.167477   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:45.167505   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:45.167534   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:45.167551   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:45.167605   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHPort
	I0528 21:58:45.167811   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHPort
	I0528 21:58:45.167826   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:45.167992   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHUsername
	I0528 21:58:45.167998   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:45.168132   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHUsername
	I0528 21:58:45.168152   73188 sshutil.go:53] new ssh client: &{IP:192.168.72.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/default-k8s-diff-port-249165/id_rsa Username:docker}
	I0528 21:58:45.168279   73188 sshutil.go:53] new ssh client: &{IP:192.168.72.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/default-k8s-diff-port-249165/id_rsa Username:docker}
	I0528 21:58:45.243473   73188 ssh_runner.go:195] Run: systemctl --version
	I0528 21:58:45.275272   73188 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0528 21:58:45.416616   73188 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0528 21:58:45.423144   73188 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0528 21:58:45.423203   73188 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 21:58:45.438939   73188 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0528 21:58:45.438963   73188 start.go:494] detecting cgroup driver to use...
	I0528 21:58:45.439035   73188 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0528 21:58:45.454944   73188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 21:58:45.469976   73188 docker.go:217] disabling cri-docker service (if available) ...
	I0528 21:58:45.470031   73188 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0528 21:58:45.484152   73188 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0528 21:58:45.497541   73188 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0528 21:58:45.622055   73188 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0528 21:58:45.760388   73188 docker.go:233] disabling docker service ...
	I0528 21:58:45.760472   73188 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0528 21:58:45.779947   73188 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0528 21:58:45.794310   73188 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0528 21:58:45.926921   73188 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0528 21:58:46.042042   73188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0528 21:58:46.055486   73188 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 21:58:46.074285   73188 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0528 21:58:46.074347   73188 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:58:46.084646   73188 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0528 21:58:46.084709   73188 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:58:46.094701   73188 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:58:46.104877   73188 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:58:46.115549   73188 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0528 21:58:46.125973   73188 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:58:46.136293   73188 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:58:46.153570   73188 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:58:46.165428   73188 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0528 21:58:46.175167   73188 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0528 21:58:46.175224   73188 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0528 21:58:46.189687   73188 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0528 21:58:46.199630   73188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 21:58:46.322596   73188 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0528 21:58:46.465841   73188 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0528 21:58:46.465905   73188 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0528 21:58:46.471249   73188 start.go:562] Will wait 60s for crictl version
	I0528 21:58:46.471301   73188 ssh_runner.go:195] Run: which crictl
	I0528 21:58:46.474963   73188 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0528 21:58:46.514028   73188 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0528 21:58:46.514111   73188 ssh_runner.go:195] Run: crio --version
	I0528 21:58:46.544060   73188 ssh_runner.go:195] Run: crio --version
	I0528 21:58:46.577448   73188 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0528 21:58:46.578815   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetIP
	I0528 21:58:46.581500   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:46.581876   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:46.581918   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:46.582081   73188 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0528 21:58:46.586277   73188 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 21:58:46.599163   73188 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-249165 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.1 ClusterName:default-k8s-diff-port-249165 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.48 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0528 21:58:46.599265   73188 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 21:58:46.599308   73188 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 21:58:46.636824   73188 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0528 21:58:46.636895   73188 ssh_runner.go:195] Run: which lz4
	I0528 21:58:46.640890   73188 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0528 21:58:46.645433   73188 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0528 21:58:46.645457   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0528 21:58:48.069572   73188 crio.go:462] duration metric: took 1.428706508s to copy over tarball
	I0528 21:58:48.069660   73188 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0528 21:58:50.289428   73188 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.2197347s)
	I0528 21:58:50.289459   73188 crio.go:469] duration metric: took 2.219854472s to extract the tarball
	I0528 21:58:50.289466   73188 ssh_runner.go:146] rm: /preloaded.tar.lz4
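
For scale: the preload copy above moved 394,537,501 bytes in 1.428706508s, roughly 276 MB/s into the KVM guest, and the lz4 extraction took another ~2.22s. A one-off Go calculation of that rate, purely illustrative:

    package main

    import "fmt"

    func main() {
        const bytesCopied = 394537501.0 // preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
        const seconds = 1.428706508     // copy duration reported by crio.go:462
        fmt.Printf("throughput: %.1f MB/s\n", bytesCopied/seconds/1e6)
    }
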
	I0528 21:58:50.329649   73188 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 21:58:50.373900   73188 crio.go:514] all images are preloaded for cri-o runtime.
	I0528 21:58:50.373922   73188 cache_images.go:84] Images are preloaded, skipping loading
	I0528 21:58:50.373928   73188 kubeadm.go:928] updating node { 192.168.72.48 8444 v1.30.1 crio true true} ...
	I0528 21:58:50.374059   73188 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-249165 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.48
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-249165 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0528 21:58:50.374142   73188 ssh_runner.go:195] Run: crio config
	I0528 21:58:50.430538   73188 cni.go:84] Creating CNI manager for ""
	I0528 21:58:50.430573   73188 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 21:58:50.430590   73188 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0528 21:58:50.430618   73188 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.48 APIServerPort:8444 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-249165 NodeName:default-k8s-diff-port-249165 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0528 21:58:50.430754   73188 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.48
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-249165"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0528 21:58:50.430822   73188 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0528 21:58:50.440906   73188 binaries.go:44] Found k8s binaries, skipping transfer
	I0528 21:58:50.440961   73188 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0528 21:58:50.450354   73188 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0528 21:58:50.467008   73188 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0528 21:58:50.483452   73188 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0528 21:58:50.500551   73188 ssh_runner.go:195] Run: grep 192.168.72.48	control-plane.minikube.internal$ /etc/hosts
	I0528 21:58:50.504597   73188 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.48	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 21:58:50.516659   73188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 21:58:50.634433   73188 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 21:58:50.651819   73188 certs.go:68] Setting up /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/default-k8s-diff-port-249165 for IP: 192.168.72.48
	I0528 21:58:50.651844   73188 certs.go:194] generating shared ca certs ...
	I0528 21:58:50.651868   73188 certs.go:226] acquiring lock for ca certs: {Name:mkf081a2e878fd451cfbdf01dc28a5f7a6a3abc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:58:50.652040   73188 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key
	I0528 21:58:50.652109   73188 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key
	I0528 21:58:50.652124   73188 certs.go:256] generating profile certs ...
	I0528 21:58:50.652223   73188 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/default-k8s-diff-port-249165/client.key
	I0528 21:58:50.652298   73188 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/default-k8s-diff-port-249165/apiserver.key.3e2f4fca
	I0528 21:58:50.652351   73188 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/default-k8s-diff-port-249165/proxy-client.key
	I0528 21:58:50.652505   73188 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem (1338 bytes)
	W0528 21:58:50.652546   73188 certs.go:480] ignoring /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760_empty.pem, impossibly tiny 0 bytes
	I0528 21:58:50.652558   73188 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem (1675 bytes)
	I0528 21:58:50.652589   73188 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem (1078 bytes)
	I0528 21:58:50.652617   73188 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem (1123 bytes)
	I0528 21:58:50.652645   73188 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem (1675 bytes)
	I0528 21:58:50.652687   73188 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem (1708 bytes)
	I0528 21:58:50.653356   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0528 21:58:50.687329   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0528 21:58:50.731844   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0528 21:58:50.758921   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0528 21:58:50.793162   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/default-k8s-diff-port-249165/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0528 21:58:50.820772   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/default-k8s-diff-port-249165/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0528 21:58:50.849830   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/default-k8s-diff-port-249165/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0528 21:58:50.875695   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/default-k8s-diff-port-249165/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0528 21:58:50.900876   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem --> /usr/share/ca-certificates/117602.pem (1708 bytes)
	I0528 21:58:50.925424   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0528 21:58:50.949453   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem --> /usr/share/ca-certificates/11760.pem (1338 bytes)
	I0528 21:58:50.973597   73188 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0528 21:58:50.990297   73188 ssh_runner.go:195] Run: openssl version
	I0528 21:58:50.996164   73188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11760.pem && ln -fs /usr/share/ca-certificates/11760.pem /etc/ssl/certs/11760.pem"
	I0528 21:58:51.007959   73188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11760.pem
	I0528 21:58:51.012987   73188 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 28 20:34 /usr/share/ca-certificates/11760.pem
	I0528 21:58:51.013062   73188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11760.pem
	I0528 21:58:51.019526   73188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11760.pem /etc/ssl/certs/51391683.0"
	I0528 21:58:51.031068   73188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/117602.pem && ln -fs /usr/share/ca-certificates/117602.pem /etc/ssl/certs/117602.pem"
	I0528 21:58:51.043064   73188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117602.pem
	I0528 21:58:51.048507   73188 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 28 20:34 /usr/share/ca-certificates/117602.pem
	I0528 21:58:51.048600   73188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117602.pem
	I0528 21:58:51.054818   73188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/117602.pem /etc/ssl/certs/3ec20f2e.0"
	I0528 21:58:51.065829   73188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0528 21:58:51.076414   73188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0528 21:58:51.081090   73188 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 28 20:22 /usr/share/ca-certificates/minikubeCA.pem
	I0528 21:58:51.081141   73188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0528 21:58:51.086736   73188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
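
The "openssl x509 -hash -noout" calls above print the OpenSSL subject hash of each CA certificate, and the "ln -fs ... /etc/ssl/certs/<hash>.0" that follows creates the hash-named symlink OpenSSL uses to locate trusted certificates. A hypothetical Go wrapper around the same two steps, collapsing the intermediate /etc/ssl/certs/<name>.pem link into a single symlink; the helper name and error handling are illustrative:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkBySubjectHash computes the OpenSSL subject hash of certPath and
    // symlinks it as /etc/ssl/certs/<hash>.0, mirroring the shell steps above.
    func linkBySubjectHash(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // "-fs" semantics: replace an existing link
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Println(err)
        }
    }
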
	I0528 21:58:51.096968   73188 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0528 21:58:51.101288   73188 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0528 21:58:51.107082   73188 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0528 21:58:51.112759   73188 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0528 21:58:51.118504   73188 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0528 21:58:51.124067   73188 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0528 21:58:51.129783   73188 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
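
Each "openssl x509 -noout -in ... -checkend 86400" call above asks a single question: does the certificate expire within the next 86400 seconds (24 hours)? An equivalent standalone check in Go against one of those files; this is a hypothetical helper, not the code minikube runs:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Same question as "openssl x509 -checkend 86400".
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        expiresSoon := time.Now().Add(86400 * time.Second).After(cert.NotAfter)
        fmt.Printf("NotAfter=%s expiresWithin24h=%v\n", cert.NotAfter, expiresSoon)
    }
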
	I0528 21:58:51.135390   73188 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-249165 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.1 ClusterName:default-k8s-diff-port-249165 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.48 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:58:51.135521   73188 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0528 21:58:51.135583   73188 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0528 21:58:51.173919   73188 cri.go:89] found id: ""
	I0528 21:58:51.173995   73188 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0528 21:58:51.184361   73188 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0528 21:58:51.184381   73188 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0528 21:58:51.184386   73188 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0528 21:58:51.184424   73188 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0528 21:58:51.194386   73188 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0528 21:58:51.195726   73188 kubeconfig.go:125] found "default-k8s-diff-port-249165" server: "https://192.168.72.48:8444"
	I0528 21:58:51.198799   73188 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0528 21:58:51.208118   73188 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.48
	I0528 21:58:51.208146   73188 kubeadm.go:1154] stopping kube-system containers ...
	I0528 21:58:51.208157   73188 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0528 21:58:51.208193   73188 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0528 21:58:51.252026   73188 cri.go:89] found id: ""
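
Both crictl listings above (State:paused and State:all) come back empty, so there are no kube-system containers left to stop before the control plane is rebuilt. A minimal local sketch of the same query (not the minikube implementation, which runs it through ssh_runner; the helper name runCrictl is made up for the sketch):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runCrictl lists CRI container IDs carrying the given namespace label,
// mirroring "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
// from the log above. Root privileges are assumed.
func runCrictl(namespace string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace="+namespace).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := runCrictl("kube-system")
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	// An empty result, as in the run above, means nothing needs stopping.
	fmt.Printf("found %d kube-system containers: %v\n", len(ids), ids)
}
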
	I0528 21:58:51.252089   73188 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0528 21:58:51.269404   73188 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0528 21:58:51.279728   73188 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0528 21:58:51.279744   73188 kubeadm.go:156] found existing configuration files:
	
	I0528 21:58:51.279790   73188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0528 21:58:51.289352   73188 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0528 21:58:51.289396   73188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0528 21:58:51.299059   73188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0528 21:58:51.308375   73188 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0528 21:58:51.308425   73188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0528 21:58:51.317866   73188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0528 21:58:51.327433   73188 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0528 21:58:51.327488   73188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0528 21:58:51.337148   73188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0528 21:58:51.346358   73188 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0528 21:58:51.346410   73188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
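
Each of the four grep checks above looks for the expected endpoint https://control-plane.minikube.internal:8444 in a kubeconfig and, when the file is missing or points elsewhere, removes it so kubeadm can regenerate it. A rough local equivalent of that loop (os.ReadFile/os.Remove standing in for grep and rm over SSH; paths and endpoint copied from the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8444"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // already points at the expected API endpoint
		}
		// Missing or stale: delete it so "kubeadm init phase kubeconfig"
		// rewrites it with the right server address.
		if err := os.Remove(f); err != nil && !os.IsNotExist(err) {
			fmt.Printf("could not remove %s: %v\n", f, err)
		}
	}
}
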
	I0528 21:58:51.355689   73188 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0528 21:58:51.365235   73188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 21:58:51.488772   73188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 21:58:52.553360   73188 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.064544437s)
	I0528 21:58:52.553398   73188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0528 21:58:52.780281   73188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 21:58:52.839188   73188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
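
The restart path then replays the individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml instead of running a full init. A compact sketch of that sequence, shelling out the same commands shown in the log (the PATH override for the pinned v1.30.1 binaries is omitted here):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	// Phase arguments in the order they appear above.
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"kubeadm", "init", "phase"}, p...)
		args = append(args, "--config", cfg)
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			fmt.Printf("phase %v failed: %v\n%s\n", p, err, out)
			return
		}
	}
	fmt.Println("all init phases completed")
}
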
	I0528 21:58:52.914117   73188 api_server.go:52] waiting for apiserver process to appear ...
	I0528 21:58:52.914222   73188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:58:53.415170   73188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:58:53.914987   73188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:58:53.933842   73188 api_server.go:72] duration metric: took 1.019725255s to wait for apiserver process to appear ...
	I0528 21:58:53.933869   73188 api_server.go:88] waiting for apiserver healthz status ...
	I0528 21:58:53.933886   73188 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8444/healthz ...
	I0528 21:58:53.934358   73188 api_server.go:269] stopped: https://192.168.72.48:8444/healthz: Get "https://192.168.72.48:8444/healthz": dial tcp 192.168.72.48:8444: connect: connection refused
	I0528 21:58:54.434146   73188 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8444/healthz ...
	I0528 21:58:56.813345   73188 api_server.go:279] https://192.168.72.48:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0528 21:58:56.813384   73188 api_server.go:103] status: https://192.168.72.48:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0528 21:58:56.813396   73188 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8444/healthz ...
	I0528 21:58:56.821906   73188 api_server.go:279] https://192.168.72.48:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0528 21:58:56.821935   73188 api_server.go:103] status: https://192.168.72.48:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0528 21:58:56.934069   73188 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8444/healthz ...
	I0528 21:58:56.941002   73188 api_server.go:279] https://192.168.72.48:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 21:58:56.941034   73188 api_server.go:103] status: https://192.168.72.48:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 21:58:57.434777   73188 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8444/healthz ...
	I0528 21:58:57.439312   73188 api_server.go:279] https://192.168.72.48:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 21:58:57.439345   73188 api_server.go:103] status: https://192.168.72.48:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 21:58:57.934912   73188 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8444/healthz ...
	I0528 21:58:57.941171   73188 api_server.go:279] https://192.168.72.48:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 21:58:57.941201   73188 api_server.go:103] status: https://192.168.72.48:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 21:58:58.434198   73188 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8444/healthz ...
	I0528 21:58:58.438164   73188 api_server.go:279] https://192.168.72.48:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 21:58:58.438190   73188 api_server.go:103] status: https://192.168.72.48:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 21:58:58.934813   73188 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8444/healthz ...
	I0528 21:58:58.939873   73188 api_server.go:279] https://192.168.72.48:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 21:58:58.939899   73188 api_server.go:103] status: https://192.168.72.48:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 21:58:59.434373   73188 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8444/healthz ...
	I0528 21:58:59.438639   73188 api_server.go:279] https://192.168.72.48:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 21:58:59.438662   73188 api_server.go:103] status: https://192.168.72.48:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 21:58:59.934909   73188 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8444/healthz ...
	I0528 21:58:59.940297   73188 api_server.go:279] https://192.168.72.48:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 21:58:59.940331   73188 api_server.go:103] status: https://192.168.72.48:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 21:59:00.434920   73188 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8444/healthz ...
	I0528 21:59:00.440734   73188 api_server.go:279] https://192.168.72.48:8444/healthz returned 200:
	ok
	I0528 21:59:00.447107   73188 api_server.go:141] control plane version: v1.30.1
	I0528 21:59:00.447129   73188 api_server.go:131] duration metric: took 6.513254325s to wait for apiserver health ...
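
Between 21:58:53 and 21:59:00 the apiserver moves from connection refused, through 403 (the anonymous probe) and 500 (post-start hooks still settling), to a plain 200 "ok". A minimal poller that reproduces that wait against the same endpoint; TLS verification is skipped purely for the sketch, whereas the real check authenticates with the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.72.48:8444/healthz"
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("not reachable yet:", err) // e.g. connection refused
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body)) // "ok"
				return
			}
			// 403 (anonymous user) and 500 (hooks still starting) both mean "retry".
			fmt.Println("status", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver healthz")
}
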
	I0528 21:59:00.447137   73188 cni.go:84] Creating CNI manager for ""
	I0528 21:59:00.447143   73188 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 21:59:00.449008   73188 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0528 21:59:00.450184   73188 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0528 21:59:00.461520   73188 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
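
The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration minikube generates; its exact contents are not shown in the log. The sketch below writes a generic bridge + host-local conflist of the same shape, purely as an illustration (the cniVersion, bridge name, and 10.244.0.0/16 subnet are assumptions, not taken from this run):

package main

import (
	"fmt"
	"os"
)

func main() {
	// Illustrative only; the file minikube actually writes may differ in detail.
	conflist := `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		fmt.Println("write failed:", err)
	}
}
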
	I0528 21:59:00.480494   73188 system_pods.go:43] waiting for kube-system pods to appear ...
	I0528 21:59:00.491722   73188 system_pods.go:59] 8 kube-system pods found
	I0528 21:59:00.491755   73188 system_pods.go:61] "coredns-7db6d8ff4d-qk6tz" [d3250a5a-2eda-41d3-86e2-227e85da8cb6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0528 21:59:00.491764   73188 system_pods.go:61] "etcd-default-k8s-diff-port-249165" [e1179b11-47b9-4803-91bb-a8d8470aac40] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0528 21:59:00.491771   73188 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-249165" [7f6c0680-8827-4f15-90e5-f8d9e1d1bc8c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0528 21:59:00.491780   73188 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-249165" [4d6f8bb3-0f4b-41fa-9b02-3b2c79513bf5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0528 21:59:00.491786   73188 system_pods.go:61] "kube-proxy-fvmjv" [df55e25a-a79a-4293-9636-31f5ebc4fc77] Running
	I0528 21:59:00.491791   73188 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-249165" [82200561-6687-448d-b73f-d0e047dec773] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0528 21:59:00.491797   73188 system_pods.go:61] "metrics-server-569cc877fc-k2q4p" [d1ec23de-6293-42a8-80f3-e28e007b6a34] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0528 21:59:00.491802   73188 system_pods.go:61] "storage-provisioner" [1f84dc9c-6b4e-44c9-82a2-5dabcb0b2178] Running
	I0528 21:59:00.491808   73188 system_pods.go:74] duration metric: took 11.287283ms to wait for pod list to return data ...
	I0528 21:59:00.491817   73188 node_conditions.go:102] verifying NodePressure condition ...
	I0528 21:59:00.495098   73188 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 21:59:00.495124   73188 node_conditions.go:123] node cpu capacity is 2
	I0528 21:59:00.495135   73188 node_conditions.go:105] duration metric: took 3.313626ms to run NodePressure ...
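
The NodePressure check reads each node's status to confirm there is usable ephemeral storage (17734596Ki here) and CPU (2 cores) before the addon phase runs. A client-go sketch of the same inspection; the ~/.kube/config path is an assumption for the sketch, not taken from the log:

package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	home, _ := os.UserHomeDir()
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// The same two capacity figures the log reports per node.
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
			n.Name,
			n.Status.Capacity.Cpu().String(),
			n.Status.Capacity.StorageEphemeral().String())
	}
}
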
	I0528 21:59:00.495151   73188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 21:59:00.782161   73188 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0528 21:59:00.786287   73188 kubeadm.go:733] kubelet initialised
	I0528 21:59:00.786308   73188 kubeadm.go:734] duration metric: took 4.112496ms waiting for restarted kubelet to initialise ...
	I0528 21:59:00.786316   73188 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 21:59:00.790951   73188 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-qk6tz" in "kube-system" namespace to be "Ready" ...
	I0528 21:59:00.795459   73188 pod_ready.go:97] node "default-k8s-diff-port-249165" hosting pod "coredns-7db6d8ff4d-qk6tz" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-249165" has status "Ready":"False"
	I0528 21:59:00.795486   73188 pod_ready.go:81] duration metric: took 4.510715ms for pod "coredns-7db6d8ff4d-qk6tz" in "kube-system" namespace to be "Ready" ...
	E0528 21:59:00.795496   73188 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-249165" hosting pod "coredns-7db6d8ff4d-qk6tz" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-249165" has status "Ready":"False"
	I0528 21:59:00.795505   73188 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-249165" in "kube-system" namespace to be "Ready" ...
	I0528 21:59:00.799372   73188 pod_ready.go:97] node "default-k8s-diff-port-249165" hosting pod "etcd-default-k8s-diff-port-249165" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-249165" has status "Ready":"False"
	I0528 21:59:00.799395   73188 pod_ready.go:81] duration metric: took 3.878119ms for pod "etcd-default-k8s-diff-port-249165" in "kube-system" namespace to be "Ready" ...
	E0528 21:59:00.799405   73188 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-249165" hosting pod "etcd-default-k8s-diff-port-249165" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-249165" has status "Ready":"False"
	I0528 21:59:00.799412   73188 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-249165" in "kube-system" namespace to be "Ready" ...
	I0528 21:59:00.803708   73188 pod_ready.go:97] node "default-k8s-diff-port-249165" hosting pod "kube-apiserver-default-k8s-diff-port-249165" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-249165" has status "Ready":"False"
	I0528 21:59:00.803732   73188 pod_ready.go:81] duration metric: took 4.312817ms for pod "kube-apiserver-default-k8s-diff-port-249165" in "kube-system" namespace to be "Ready" ...
	E0528 21:59:00.803744   73188 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-249165" hosting pod "kube-apiserver-default-k8s-diff-port-249165" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-249165" has status "Ready":"False"
	I0528 21:59:00.803752   73188 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-249165" in "kube-system" namespace to be "Ready" ...
	I0528 21:59:00.883526   73188 pod_ready.go:97] node "default-k8s-diff-port-249165" hosting pod "kube-controller-manager-default-k8s-diff-port-249165" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-249165" has status "Ready":"False"
	I0528 21:59:00.883552   73188 pod_ready.go:81] duration metric: took 79.787719ms for pod "kube-controller-manager-default-k8s-diff-port-249165" in "kube-system" namespace to be "Ready" ...
	E0528 21:59:00.883562   73188 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-249165" hosting pod "kube-controller-manager-default-k8s-diff-port-249165" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-249165" has status "Ready":"False"
	I0528 21:59:00.883569   73188 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fvmjv" in "kube-system" namespace to be "Ready" ...
	I0528 21:59:01.284553   73188 pod_ready.go:92] pod "kube-proxy-fvmjv" in "kube-system" namespace has status "Ready":"True"
	I0528 21:59:01.284580   73188 pod_ready.go:81] duration metric: took 401.003384ms for pod "kube-proxy-fvmjv" in "kube-system" namespace to be "Ready" ...
	I0528 21:59:01.284590   73188 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-249165" in "kube-system" namespace to be "Ready" ...
	I0528 21:59:03.293222   73188 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-249165" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:04.291145   73188 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-249165" in "kube-system" namespace has status "Ready":"True"
	I0528 21:59:04.291171   73188 pod_ready.go:81] duration metric: took 3.006571778s for pod "kube-scheduler-default-k8s-diff-port-249165" in "kube-system" namespace to be "Ready" ...
	I0528 21:59:04.291183   73188 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace to be "Ready" ...
	I0528 21:59:06.297256   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:08.299092   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:10.797261   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:12.797546   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:15.297532   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:17.297769   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:19.298152   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:21.797794   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:24.298073   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:26.797503   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:29.297699   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:31.298091   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:33.799278   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:36.298358   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:38.298659   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:40.797501   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:43.297098   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:45.297322   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:47.798004   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:49.798749   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:52.296950   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:54.297779   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:56.297921   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:58.797953   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:01.297566   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:03.302555   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:05.797610   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:07.797893   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:09.798237   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:12.297953   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:14.298232   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:16.798660   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:19.296867   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:21.297325   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:23.797687   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:26.298657   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:28.798073   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:31.299219   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:33.800018   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:36.297914   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:38.297984   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:40.796919   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:42.798156   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:44.800231   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:47.297425   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:49.800316   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:52.297415   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:54.297549   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:56.798787   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:59.297851   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:01.298008   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:03.298732   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:05.797817   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:07.797913   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:10.297286   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:12.797866   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:14.799144   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:17.297592   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:19.298065   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:21.797973   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:23.798794   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:26.298087   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:28.300587   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:30.797976   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:33.297574   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:35.298403   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:37.797436   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:40.300414   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:42.797172   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:45.297340   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:47.297684   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:49.298815   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:51.299597   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:53.798447   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:56.297483   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:58.298264   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:00.798507   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:03.297276   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:05.299518   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:07.799770   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:10.300402   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:12.796971   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:14.798057   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:16.798315   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:18.800481   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:21.298816   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:23.797133   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:25.798165   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:28.297030   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:30.797031   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:32.797960   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:34.798334   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:37.298013   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:39.797122   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:42.297054   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:44.297976   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:46.797135   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:48.797338   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:50.797608   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:53.299621   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:55.797973   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:57.798174   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:03:00.298537   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:03:02.796804   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:03:04.291841   73188 pod_ready.go:81] duration metric: took 4m0.000641837s for pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace to be "Ready" ...
	E0528 22:03:04.291876   73188 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace to be "Ready" (will not retry!)
	I0528 22:03:04.291893   73188 pod_ready.go:38] duration metric: took 4m3.505569148s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
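
The four-minute wait above polls pod "metrics-server-569cc877fc-k2q4p" for a Ready condition that never turns True, which is expected here since the cluster config overrides the metrics-server registry to fake.domain, and the loop gives up at 22:03:04. A client-go sketch of that kind of readiness poll (pod name and namespace copied from the log, kubeconfig path assumed):

package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's Ready condition is True.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	home, _ := os.UserHomeDir()
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config")) // path assumed
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // same budget as the log's 4m0s wait
	for time.Now().Before(deadline) {
		pod, err := clientset.CoreV1().Pods("kube-system").Get(
			context.Background(), "metrics-server-569cc877fc-k2q4p", metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
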
	I0528 22:03:04.291917   73188 kubeadm.go:591] duration metric: took 4m13.107527237s to restartPrimaryControlPlane
	W0528 22:03:04.291969   73188 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0528 22:03:04.291999   73188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0528 22:03:35.997887   73188 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.705862339s)
	I0528 22:03:35.997980   73188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 22:03:36.013927   73188 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0528 22:03:36.023856   73188 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0528 22:03:36.033329   73188 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0528 22:03:36.033349   73188 kubeadm.go:156] found existing configuration files:
	
	I0528 22:03:36.033385   73188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0528 22:03:36.042504   73188 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0528 22:03:36.042555   73188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0528 22:03:36.051990   73188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0528 22:03:36.061602   73188 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0528 22:03:36.061672   73188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0528 22:03:36.071582   73188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0528 22:03:36.081217   73188 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0528 22:03:36.081289   73188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0528 22:03:36.091380   73188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0528 22:03:36.101427   73188 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0528 22:03:36.101491   73188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0528 22:03:36.111166   73188 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0528 22:03:36.167427   73188 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0528 22:03:36.167584   73188 kubeadm.go:309] [preflight] Running pre-flight checks
	I0528 22:03:36.319657   73188 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0528 22:03:36.319762   73188 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0528 22:03:36.319861   73188 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0528 22:03:36.570417   73188 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0528 22:03:36.572409   73188 out.go:204]   - Generating certificates and keys ...
	I0528 22:03:36.572503   73188 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0528 22:03:36.572615   73188 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0528 22:03:36.572723   73188 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0528 22:03:36.572801   73188 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0528 22:03:36.572895   73188 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0528 22:03:36.572944   73188 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0528 22:03:36.572999   73188 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0528 22:03:36.573087   73188 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0528 22:03:36.573192   73188 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0528 22:03:36.573348   73188 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0528 22:03:36.573818   73188 kubeadm.go:309] [certs] Using the existing "sa" key
	I0528 22:03:36.573889   73188 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0528 22:03:36.671532   73188 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0528 22:03:36.741211   73188 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0528 22:03:36.908326   73188 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0528 22:03:37.058636   73188 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0528 22:03:37.237907   73188 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0528 22:03:37.238660   73188 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0528 22:03:37.242660   73188 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0528 22:03:37.244632   73188 out.go:204]   - Booting up control plane ...
	I0528 22:03:37.244721   73188 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0528 22:03:37.244790   73188 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0528 22:03:37.244999   73188 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0528 22:03:37.267448   73188 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0528 22:03:37.268482   73188 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0528 22:03:37.268550   73188 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0528 22:03:37.405936   73188 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0528 22:03:37.406050   73188 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0528 22:03:37.907833   73188 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.378139ms
	I0528 22:03:37.907936   73188 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0528 22:03:42.910213   73188 kubeadm.go:309] [api-check] The API server is healthy after 5.00224578s
	I0528 22:03:42.926650   73188 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0528 22:03:42.943917   73188 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0528 22:03:42.972044   73188 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0528 22:03:42.972264   73188 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-249165 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0528 22:03:42.986882   73188 kubeadm.go:309] [bootstrap-token] Using token: cf4624.vgyi0c4jykmr5x8u
	I0528 22:03:42.988295   73188 out.go:204]   - Configuring RBAC rules ...
	I0528 22:03:42.988438   73188 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0528 22:03:42.994583   73188 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0528 22:03:43.003191   73188 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0528 22:03:43.007110   73188 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0528 22:03:43.014038   73188 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0528 22:03:43.022358   73188 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0528 22:03:43.322836   73188 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0528 22:03:43.790286   73188 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0528 22:03:44.317555   73188 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0528 22:03:44.318811   73188 kubeadm.go:309] 
	I0528 22:03:44.318906   73188 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0528 22:03:44.318933   73188 kubeadm.go:309] 
	I0528 22:03:44.319041   73188 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0528 22:03:44.319052   73188 kubeadm.go:309] 
	I0528 22:03:44.319073   73188 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0528 22:03:44.319128   73188 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0528 22:03:44.319171   73188 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0528 22:03:44.319178   73188 kubeadm.go:309] 
	I0528 22:03:44.319333   73188 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0528 22:03:44.319349   73188 kubeadm.go:309] 
	I0528 22:03:44.319390   73188 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0528 22:03:44.319395   73188 kubeadm.go:309] 
	I0528 22:03:44.319437   73188 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0528 22:03:44.319501   73188 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0528 22:03:44.319597   73188 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0528 22:03:44.319617   73188 kubeadm.go:309] 
	I0528 22:03:44.319758   73188 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0528 22:03:44.319881   73188 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0528 22:03:44.319894   73188 kubeadm.go:309] 
	I0528 22:03:44.320006   73188 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token cf4624.vgyi0c4jykmr5x8u \
	I0528 22:03:44.320098   73188 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1de7c92f7c6fda1d3adee02fe3e58e442e8ad50d9d224864ca8764aa1a3ae8bb \
	I0528 22:03:44.320118   73188 kubeadm.go:309] 	--control-plane 
	I0528 22:03:44.320125   73188 kubeadm.go:309] 
	I0528 22:03:44.320201   73188 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0528 22:03:44.320209   73188 kubeadm.go:309] 
	I0528 22:03:44.320284   73188 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token cf4624.vgyi0c4jykmr5x8u \
	I0528 22:03:44.320405   73188 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1de7c92f7c6fda1d3adee02fe3e58e442e8ad50d9d224864ca8764aa1a3ae8bb 
	I0528 22:03:44.320885   73188 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0528 22:03:44.320929   73188 cni.go:84] Creating CNI manager for ""
	I0528 22:03:44.320945   73188 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 22:03:44.322688   73188 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0528 22:03:44.323999   73188 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0528 22:03:44.335532   73188 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0528 22:03:44.356272   73188 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0528 22:03:44.356380   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:44.356387   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-249165 minikube.k8s.io/updated_at=2024_05_28T22_03_44_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82 minikube.k8s.io/name=default-k8s-diff-port-249165 minikube.k8s.io/primary=true
	I0528 22:03:44.384624   73188 ops.go:34] apiserver oom_adj: -16
	I0528 22:03:44.563265   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:45.063599   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:45.563789   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:46.063279   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:46.564010   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:47.063573   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:47.563386   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:48.064282   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:48.563854   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:49.063459   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:49.564059   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:50.064286   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:50.564237   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:51.063435   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:51.563256   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:52.063661   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:52.563554   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:53.063681   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:53.563368   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:54.063863   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:54.563426   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:55.063793   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:55.564268   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
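	The retry loop above shows minikube repeatedly running `kubectl get sa default` until the default ServiceAccount exists, which is the signal that the freshly reset control plane is serving requests. As an illustration only (this is not minikube's actual implementation), a hedged client-go sketch of the same readiness check could look like the following; the kubeconfig path is taken from the log above, while the 2-minute timeout is an assumed bound and the 500ms interval simply mirrors the cadence visible in the retries.

	// Illustrative sketch (not minikube's code): poll the API server until the
	// "default" ServiceAccount in the "default" namespace exists.
	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path as seen in the log; adjust for your environment.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Retry every 500ms, up to an assumed 2-minute ceiling.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 2*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				_, getErr := client.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
				if getErr != nil {
					return false, nil // not ready yet; keep polling
				}
				return true, nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Println("default ServiceAccount present; control plane is serving")
	}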
	
	
	==> CRI-O <==
	May 28 22:03:56 no-preload-290122 crio[732]: time="2024-05-28 22:03:56.300236054Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716933836300212081,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99934,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=48c1e38e-39e9-4ac3-a0c7-c7fc666105be name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:03:56 no-preload-290122 crio[732]: time="2024-05-28 22:03:56.301088364Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9d39e92a-51ef-48fc-9936-df2ae3b1b41d name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:03:56 no-preload-290122 crio[732]: time="2024-05-28 22:03:56.301156581Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9d39e92a-51ef-48fc-9936-df2ae3b1b41d name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:03:56 no-preload-290122 crio[732]: time="2024-05-28 22:03:56.301661448Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba,PodSandboxId:c7ab7d7de21b78a31a438ae8daa51a9da170d56c51dd669a6c88447ed7be4d4e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716933060042350178,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc1a5463-05e0-4213-a7a8-2dd7f355ac36,},Annotations:map[string]string{io.kubernetes.container.hash: 6f988c16,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6293b218a7f4ac452285cb7a65b1cc98ac1fbfb6c10c4e590c6dc8f7e3d295,PodSandboxId:7b2a6ef244bb4e90bffdd1a1d60935ce85eb6c6a064b196112c47571d4693a2f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1716933039907350972,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9b912c7e-7dc0-406d-934e-56f8c76293b4,},Annotations:map[string]string{io.kubernetes.container.hash: 3be541bb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b,PodSandboxId:9ef91c405fbc6f4838b947b9c9f47db5c1422301c1fbbce84edd53778bdbcd51,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716933036937140357,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fmk2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a084dfb5-5818-4244-9052-a9f861b45617,},Annotations:map[string]string{io.kubernetes.container.hash: fc6b3bd4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910,PodSandboxId:2914453aecb392789a4523498032d124e3ee272d48cf1fdf6f6ee55a4f928f7e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716933029192948487,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w45qh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f962c73d-872d-4f78-a6
28-267cb0be49bb,},Annotations:map[string]string{io.kubernetes.container.hash: 5a301e43,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0,PodSandboxId:c7ab7d7de21b78a31a438ae8daa51a9da170d56c51dd669a6c88447ed7be4d4e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716933029185108588,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc1a5463-05e0-4213-a7a8-2dd7f355ac
36,},Annotations:map[string]string{io.kubernetes.container.hash: 6f988c16,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af,PodSandboxId:37c3f2a6893a0ff6fe9f38f34348d82cd4cb94bf3fa884519ae0a93a6a250a19,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716933025526288076,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-290122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c52ccd26e857fd3c5eca30f8dbd103f,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc,PodSandboxId:24cc59eab1e5a3ec0585d385ca7d0de4c8f23ca6532ca7464cf28ba6ffa528db,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716933025557677625,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-290122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0bcb2cd3d47aad67c2dd098b794a5d7,},Annotations:map[strin
g]string{io.kubernetes.container.hash: c73b998,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a,PodSandboxId:b15ca0befb6f4a1b46904e62c844e9cf4a9cb70e55a6ae50f78b4126561ac5f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716933025526640126,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-290122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3357f39709a332110267d0f3d64c4674,},Annotati
ons:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e,PodSandboxId:365b73e6cf561c95c62b4c8a0e57b4a49f788144f89a8c6e304cad545934fe77,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716933025455688727,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-290122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5222ebcf86d1db94279a588215feff43,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 5e86551b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9d39e92a-51ef-48fc-9936-df2ae3b1b41d name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:03:56 no-preload-290122 crio[732]: time="2024-05-28 22:03:56.341257204Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3c44d265-5dbd-4188-8e1d-99203bbc3c35 name=/runtime.v1.RuntimeService/Version
	May 28 22:03:56 no-preload-290122 crio[732]: time="2024-05-28 22:03:56.341385595Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3c44d265-5dbd-4188-8e1d-99203bbc3c35 name=/runtime.v1.RuntimeService/Version
	May 28 22:03:56 no-preload-290122 crio[732]: time="2024-05-28 22:03:56.342992290Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ecbf57e1-c1f2-47b9-a699-32c1f86349dd name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:03:56 no-preload-290122 crio[732]: time="2024-05-28 22:03:56.343567626Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716933836343539681,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99934,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ecbf57e1-c1f2-47b9-a699-32c1f86349dd name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:03:56 no-preload-290122 crio[732]: time="2024-05-28 22:03:56.344237430Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5c692bd0-b97d-4307-8da2-7300f9e284e5 name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:03:56 no-preload-290122 crio[732]: time="2024-05-28 22:03:56.344311305Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5c692bd0-b97d-4307-8da2-7300f9e284e5 name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:03:56 no-preload-290122 crio[732]: time="2024-05-28 22:03:56.344747155Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba,PodSandboxId:c7ab7d7de21b78a31a438ae8daa51a9da170d56c51dd669a6c88447ed7be4d4e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716933060042350178,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc1a5463-05e0-4213-a7a8-2dd7f355ac36,},Annotations:map[string]string{io.kubernetes.container.hash: 6f988c16,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6293b218a7f4ac452285cb7a65b1cc98ac1fbfb6c10c4e590c6dc8f7e3d295,PodSandboxId:7b2a6ef244bb4e90bffdd1a1d60935ce85eb6c6a064b196112c47571d4693a2f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1716933039907350972,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9b912c7e-7dc0-406d-934e-56f8c76293b4,},Annotations:map[string]string{io.kubernetes.container.hash: 3be541bb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b,PodSandboxId:9ef91c405fbc6f4838b947b9c9f47db5c1422301c1fbbce84edd53778bdbcd51,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716933036937140357,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fmk2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a084dfb5-5818-4244-9052-a9f861b45617,},Annotations:map[string]string{io.kubernetes.container.hash: fc6b3bd4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910,PodSandboxId:2914453aecb392789a4523498032d124e3ee272d48cf1fdf6f6ee55a4f928f7e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716933029192948487,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w45qh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f962c73d-872d-4f78-a6
28-267cb0be49bb,},Annotations:map[string]string{io.kubernetes.container.hash: 5a301e43,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0,PodSandboxId:c7ab7d7de21b78a31a438ae8daa51a9da170d56c51dd669a6c88447ed7be4d4e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716933029185108588,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc1a5463-05e0-4213-a7a8-2dd7f355ac
36,},Annotations:map[string]string{io.kubernetes.container.hash: 6f988c16,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af,PodSandboxId:37c3f2a6893a0ff6fe9f38f34348d82cd4cb94bf3fa884519ae0a93a6a250a19,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716933025526288076,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-290122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c52ccd26e857fd3c5eca30f8dbd103f,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc,PodSandboxId:24cc59eab1e5a3ec0585d385ca7d0de4c8f23ca6532ca7464cf28ba6ffa528db,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716933025557677625,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-290122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0bcb2cd3d47aad67c2dd098b794a5d7,},Annotations:map[strin
g]string{io.kubernetes.container.hash: c73b998,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a,PodSandboxId:b15ca0befb6f4a1b46904e62c844e9cf4a9cb70e55a6ae50f78b4126561ac5f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716933025526640126,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-290122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3357f39709a332110267d0f3d64c4674,},Annotati
ons:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e,PodSandboxId:365b73e6cf561c95c62b4c8a0e57b4a49f788144f89a8c6e304cad545934fe77,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716933025455688727,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-290122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5222ebcf86d1db94279a588215feff43,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 5e86551b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5c692bd0-b97d-4307-8da2-7300f9e284e5 name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:03:56 no-preload-290122 crio[732]: time="2024-05-28 22:03:56.393335857Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eb509e17-fc90-4571-915a-bacdadfe4204 name=/runtime.v1.RuntimeService/Version
	May 28 22:03:56 no-preload-290122 crio[732]: time="2024-05-28 22:03:56.393504794Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eb509e17-fc90-4571-915a-bacdadfe4204 name=/runtime.v1.RuntimeService/Version
	May 28 22:03:56 no-preload-290122 crio[732]: time="2024-05-28 22:03:56.395105249Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c1e3714e-f2ae-4171-88c1-039311669050 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:03:56 no-preload-290122 crio[732]: time="2024-05-28 22:03:56.395628328Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716933836395590389,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99934,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c1e3714e-f2ae-4171-88c1-039311669050 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:03:56 no-preload-290122 crio[732]: time="2024-05-28 22:03:56.396329267Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5c12b082-a413-4cea-83a5-6d67482ef66c name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:03:56 no-preload-290122 crio[732]: time="2024-05-28 22:03:56.396515059Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5c12b082-a413-4cea-83a5-6d67482ef66c name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:03:56 no-preload-290122 crio[732]: time="2024-05-28 22:03:56.396782179Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba,PodSandboxId:c7ab7d7de21b78a31a438ae8daa51a9da170d56c51dd669a6c88447ed7be4d4e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716933060042350178,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc1a5463-05e0-4213-a7a8-2dd7f355ac36,},Annotations:map[string]string{io.kubernetes.container.hash: 6f988c16,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6293b218a7f4ac452285cb7a65b1cc98ac1fbfb6c10c4e590c6dc8f7e3d295,PodSandboxId:7b2a6ef244bb4e90bffdd1a1d60935ce85eb6c6a064b196112c47571d4693a2f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1716933039907350972,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9b912c7e-7dc0-406d-934e-56f8c76293b4,},Annotations:map[string]string{io.kubernetes.container.hash: 3be541bb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b,PodSandboxId:9ef91c405fbc6f4838b947b9c9f47db5c1422301c1fbbce84edd53778bdbcd51,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716933036937140357,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fmk2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a084dfb5-5818-4244-9052-a9f861b45617,},Annotations:map[string]string{io.kubernetes.container.hash: fc6b3bd4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910,PodSandboxId:2914453aecb392789a4523498032d124e3ee272d48cf1fdf6f6ee55a4f928f7e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716933029192948487,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w45qh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f962c73d-872d-4f78-a6
28-267cb0be49bb,},Annotations:map[string]string{io.kubernetes.container.hash: 5a301e43,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0,PodSandboxId:c7ab7d7de21b78a31a438ae8daa51a9da170d56c51dd669a6c88447ed7be4d4e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716933029185108588,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc1a5463-05e0-4213-a7a8-2dd7f355ac
36,},Annotations:map[string]string{io.kubernetes.container.hash: 6f988c16,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af,PodSandboxId:37c3f2a6893a0ff6fe9f38f34348d82cd4cb94bf3fa884519ae0a93a6a250a19,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716933025526288076,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-290122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c52ccd26e857fd3c5eca30f8dbd103f,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc,PodSandboxId:24cc59eab1e5a3ec0585d385ca7d0de4c8f23ca6532ca7464cf28ba6ffa528db,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716933025557677625,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-290122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0bcb2cd3d47aad67c2dd098b794a5d7,},Annotations:map[strin
g]string{io.kubernetes.container.hash: c73b998,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a,PodSandboxId:b15ca0befb6f4a1b46904e62c844e9cf4a9cb70e55a6ae50f78b4126561ac5f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716933025526640126,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-290122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3357f39709a332110267d0f3d64c4674,},Annotati
ons:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e,PodSandboxId:365b73e6cf561c95c62b4c8a0e57b4a49f788144f89a8c6e304cad545934fe77,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716933025455688727,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-290122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5222ebcf86d1db94279a588215feff43,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 5e86551b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5c12b082-a413-4cea-83a5-6d67482ef66c name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:03:56 no-preload-290122 crio[732]: time="2024-05-28 22:03:56.436189860Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cabb7c4b-c81c-4aa3-9b4b-fb46d04ac482 name=/runtime.v1.RuntimeService/Version
	May 28 22:03:56 no-preload-290122 crio[732]: time="2024-05-28 22:03:56.436291811Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cabb7c4b-c81c-4aa3-9b4b-fb46d04ac482 name=/runtime.v1.RuntimeService/Version
	May 28 22:03:56 no-preload-290122 crio[732]: time="2024-05-28 22:03:56.437821258Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b9883750-6be8-475b-b506-7faef3cc0cb6 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:03:56 no-preload-290122 crio[732]: time="2024-05-28 22:03:56.438201195Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716933836438174502,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99934,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b9883750-6be8-475b-b506-7faef3cc0cb6 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:03:56 no-preload-290122 crio[732]: time="2024-05-28 22:03:56.438997209Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9875a1e3-1db8-437e-b47a-4ceb353554c2 name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:03:56 no-preload-290122 crio[732]: time="2024-05-28 22:03:56.439072391Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9875a1e3-1db8-437e-b47a-4ceb353554c2 name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:03:56 no-preload-290122 crio[732]: time="2024-05-28 22:03:56.439322677Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba,PodSandboxId:c7ab7d7de21b78a31a438ae8daa51a9da170d56c51dd669a6c88447ed7be4d4e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716933060042350178,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc1a5463-05e0-4213-a7a8-2dd7f355ac36,},Annotations:map[string]string{io.kubernetes.container.hash: 6f988c16,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6293b218a7f4ac452285cb7a65b1cc98ac1fbfb6c10c4e590c6dc8f7e3d295,PodSandboxId:7b2a6ef244bb4e90bffdd1a1d60935ce85eb6c6a064b196112c47571d4693a2f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1716933039907350972,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9b912c7e-7dc0-406d-934e-56f8c76293b4,},Annotations:map[string]string{io.kubernetes.container.hash: 3be541bb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b,PodSandboxId:9ef91c405fbc6f4838b947b9c9f47db5c1422301c1fbbce84edd53778bdbcd51,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716933036937140357,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fmk2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a084dfb5-5818-4244-9052-a9f861b45617,},Annotations:map[string]string{io.kubernetes.container.hash: fc6b3bd4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910,PodSandboxId:2914453aecb392789a4523498032d124e3ee272d48cf1fdf6f6ee55a4f928f7e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716933029192948487,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w45qh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f962c73d-872d-4f78-a6
28-267cb0be49bb,},Annotations:map[string]string{io.kubernetes.container.hash: 5a301e43,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0,PodSandboxId:c7ab7d7de21b78a31a438ae8daa51a9da170d56c51dd669a6c88447ed7be4d4e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716933029185108588,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc1a5463-05e0-4213-a7a8-2dd7f355ac
36,},Annotations:map[string]string{io.kubernetes.container.hash: 6f988c16,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af,PodSandboxId:37c3f2a6893a0ff6fe9f38f34348d82cd4cb94bf3fa884519ae0a93a6a250a19,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716933025526288076,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-290122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c52ccd26e857fd3c5eca30f8dbd103f,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc,PodSandboxId:24cc59eab1e5a3ec0585d385ca7d0de4c8f23ca6532ca7464cf28ba6ffa528db,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716933025557677625,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-290122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0bcb2cd3d47aad67c2dd098b794a5d7,},Annotations:map[strin
g]string{io.kubernetes.container.hash: c73b998,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a,PodSandboxId:b15ca0befb6f4a1b46904e62c844e9cf4a9cb70e55a6ae50f78b4126561ac5f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716933025526640126,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-290122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3357f39709a332110267d0f3d64c4674,},Annotati
ons:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e,PodSandboxId:365b73e6cf561c95c62b4c8a0e57b4a49f788144f89a8c6e304cad545934fe77,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716933025455688727,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-290122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5222ebcf86d1db94279a588215feff43,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 5e86551b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9875a1e3-1db8-437e-b47a-4ceb353554c2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6e80571418c7d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       3                   c7ab7d7de21b7       storage-provisioner
	6e6293b218a7f       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   7b2a6ef244bb4       busybox
	ebc2314ec3dcb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   9ef91c405fbc6       coredns-7db6d8ff4d-fmk2h
	9a787e20b35dd       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      13 minutes ago      Running             kube-proxy                1                   2914453aecb39       kube-proxy-w45qh
	912c92cb728e6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       2                   c7ab7d7de21b7       storage-provisioner
	42608327556ea       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      13 minutes ago      Running             kube-apiserver            1                   24cc59eab1e5a       kube-apiserver-no-preload-290122
	e1f2c88b18006       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      13 minutes ago      Running             kube-controller-manager   1                   b15ca0befb6f4       kube-controller-manager-no-preload-290122
	e3d4c1df4c10f       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      13 minutes ago      Running             kube-scheduler            1                   37c3f2a6893a0       kube-scheduler-no-preload-290122
	48e5c5e140f93       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago      Running             etcd                      1                   365b73e6cf561       etcd-no-preload-290122
	
	
	==> coredns [ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50005 - 31532 "HINFO IN 7776364950442401203.7578220013407324169. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.00938918s
	
	
	==> describe nodes <==
	Name:               no-preload-290122
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-290122
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82
	                    minikube.k8s.io/name=no-preload-290122
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_28T21_40_28_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 21:40:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-290122
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 22:03:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 May 2024 22:01:12 +0000   Tue, 28 May 2024 21:40:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 May 2024 22:01:12 +0000   Tue, 28 May 2024 21:40:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 May 2024 22:01:12 +0000   Tue, 28 May 2024 21:40:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 May 2024 22:01:12 +0000   Tue, 28 May 2024 21:50:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.138
	  Hostname:    no-preload-290122
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ca6de69c011242d2b09e549a99f050f4
	  System UUID:                ca6de69c-0112-42d2-b09e-549a99f050f4
	  Boot ID:                    9b840b8d-7c5d-4481-b7a6-bca6f3fd097a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-7db6d8ff4d-fmk2h                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	  kube-system                 etcd-no-preload-290122                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23m
	  kube-system                 kube-apiserver-no-preload-290122             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-no-preload-290122    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-w45qh                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-no-preload-290122             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 metrics-server-569cc877fc-j2khc              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     23m                kubelet          Node no-preload-290122 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  23m                kubelet          Node no-preload-290122 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m                kubelet          Node no-preload-290122 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 23m                kubelet          Starting kubelet.
	  Normal  NodeReady                23m                kubelet          Node no-preload-290122 status is now: NodeReady
	  Normal  RegisteredNode           23m                node-controller  Node no-preload-290122 event: Registered Node no-preload-290122 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-290122 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-290122 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-290122 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-290122 event: Registered Node no-preload-290122 in Controller
	
	
	==> dmesg <==
	[May28 21:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.060137] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042512] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.731246] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.451621] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.482381] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[May28 21:50] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.062272] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067999] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +0.196619] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +0.152020] systemd-fstab-generator[686]: Ignoring "noauto" option for root device
	[  +0.297350] systemd-fstab-generator[715]: Ignoring "noauto" option for root device
	[ +16.239014] systemd-fstab-generator[1238]: Ignoring "noauto" option for root device
	[  +0.068653] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.363733] systemd-fstab-generator[1359]: Ignoring "noauto" option for root device
	[  +4.597179] kauditd_printk_skb: 100 callbacks suppressed
	[  +2.452823] systemd-fstab-generator[1979]: Ignoring "noauto" option for root device
	[  +3.327825] kauditd_printk_skb: 56 callbacks suppressed
	[  +5.058167] kauditd_printk_skb: 35 callbacks suppressed
	
	
	==> etcd [48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e] <==
	{"level":"info","ts":"2024-05-28T21:50:26.034304Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.138:2380"}
	{"level":"info","ts":"2024-05-28T21:50:26.034473Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.138:2380"}
	{"level":"info","ts":"2024-05-28T21:50:26.034931Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8b11dde95a80b86b","initial-advertise-peer-urls":["https://192.168.50.138:2380"],"listen-peer-urls":["https://192.168.50.138:2380"],"advertise-client-urls":["https://192.168.50.138:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.138:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-28T21:50:26.036509Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-28T21:50:27.259496Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b11dde95a80b86b is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-28T21:50:27.259558Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b11dde95a80b86b became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-28T21:50:27.259606Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b11dde95a80b86b received MsgPreVoteResp from 8b11dde95a80b86b at term 2"}
	{"level":"info","ts":"2024-05-28T21:50:27.259624Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b11dde95a80b86b became candidate at term 3"}
	{"level":"info","ts":"2024-05-28T21:50:27.25963Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b11dde95a80b86b received MsgVoteResp from 8b11dde95a80b86b at term 3"}
	{"level":"info","ts":"2024-05-28T21:50:27.259658Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b11dde95a80b86b became leader at term 3"}
	{"level":"info","ts":"2024-05-28T21:50:27.259671Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8b11dde95a80b86b elected leader 8b11dde95a80b86b at term 3"}
	{"level":"info","ts":"2024-05-28T21:50:27.27953Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"8b11dde95a80b86b","local-member-attributes":"{Name:no-preload-290122 ClientURLs:[https://192.168.50.138:2379]}","request-path":"/0/members/8b11dde95a80b86b/attributes","cluster-id":"ab0e41ccc9bb2ba","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-28T21:50:27.279587Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-28T21:50:27.279686Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-28T21:50:27.282117Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.138:2379"}
	{"level":"info","ts":"2024-05-28T21:50:27.282208Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-28T21:50:27.282239Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-28T21:50:27.284877Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-28T21:58:52.026142Z","caller":"traceutil/trace.go:171","msg":"trace[2018511596] transaction","detail":"{read_only:false; response_revision:1002; number_of_response:1; }","duration":"211.347913ms","start":"2024-05-28T21:58:51.814738Z","end":"2024-05-28T21:58:52.026086Z","steps":["trace[2018511596] 'process raft request'  (duration: 211.219863ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T21:58:52.929515Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"639.014565ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-28T21:58:52.930179Z","caller":"traceutil/trace.go:171","msg":"trace[1851811743] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1002; }","duration":"639.861959ms","start":"2024-05-28T21:58:52.290291Z","end":"2024-05-28T21:58:52.930153Z","steps":["trace[1851811743] 'range keys from in-memory index tree'  (duration: 638.969441ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T21:58:52.930285Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-28T21:58:52.290277Z","time spent":"639.975101ms","remote":"127.0.0.1:44198","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":0,"response size":28,"request content":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" "}
	{"level":"info","ts":"2024-05-28T22:00:27.33975Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":835}
	{"level":"info","ts":"2024-05-28T22:00:27.351663Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":835,"took":"11.508806ms","hash":3359253185,"current-db-size-bytes":2629632,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2629632,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-05-28T22:00:27.351734Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3359253185,"revision":835,"compact-revision":-1}
	
	
	==> kernel <==
	 22:03:56 up 14 min,  0 users,  load average: 0.26, 0.29, 0.20
	Linux no-preload-290122 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc] <==
	Trace[1744710424]: [641.054428ms] [641.054428ms] END
	W0528 22:00:28.666612       1 handler_proxy.go:93] no RequestInfo found in the context
	E0528 22:00:28.666725       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0528 22:00:29.667184       1 handler_proxy.go:93] no RequestInfo found in the context
	E0528 22:00:29.667279       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0528 22:00:29.667306       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0528 22:00:29.667366       1 handler_proxy.go:93] no RequestInfo found in the context
	E0528 22:00:29.667506       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0528 22:00:29.668752       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0528 22:01:29.667934       1 handler_proxy.go:93] no RequestInfo found in the context
	E0528 22:01:29.668074       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0528 22:01:29.668107       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0528 22:01:29.669126       1 handler_proxy.go:93] no RequestInfo found in the context
	E0528 22:01:29.669191       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0528 22:01:29.669198       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0528 22:03:29.669294       1 handler_proxy.go:93] no RequestInfo found in the context
	E0528 22:03:29.669489       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0528 22:03:29.669515       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0528 22:03:29.669583       1 handler_proxy.go:93] no RequestInfo found in the context
	E0528 22:03:29.669655       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0528 22:03:29.671467       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
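The 503s above come from the aggregation layer: the v1beta1.metrics.k8s.io APIService is backed by the metrics-server Service, which never gets a ready endpoint because the pod cannot pull its image (see the kubelet log below). A diagnostic sketch, assuming the context name from this log and the standard k8s-app=metrics-server label:

	kubectl --context no-preload-290122 get apiservice v1beta1.metrics.k8s.io
	kubectl --context no-preload-290122 -n kube-system get pods -l k8s-app=metrics-server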
	
	
	==> kube-controller-manager [e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a] <==
	I0528 21:58:12.608255       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 21:58:42.140232       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 21:58:42.617622       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 21:59:12.144384       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 21:59:12.628475       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 21:59:42.150109       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 21:59:42.636106       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 22:00:12.157591       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:00:12.644054       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 22:00:42.163302       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:00:42.653685       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 22:01:12.168125       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:01:12.660924       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0528 22:01:30.865390       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="254.342µs"
	E0528 22:01:42.173349       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:01:42.668866       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0528 22:01:44.864953       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="50.061µs"
	E0528 22:02:12.179775       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:02:12.677151       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 22:02:42.185774       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:02:42.689364       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 22:03:12.190033       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:03:12.697394       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 22:03:42.196189       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:03:42.705296       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910] <==
	I0528 21:50:29.368066       1 server_linux.go:69] "Using iptables proxy"
	I0528 21:50:29.378278       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.138"]
	I0528 21:50:29.419761       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0528 21:50:29.419842       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0528 21:50:29.419870       1 server_linux.go:165] "Using iptables Proxier"
	I0528 21:50:29.422847       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0528 21:50:29.423078       1 server.go:872] "Version info" version="v1.30.1"
	I0528 21:50:29.423282       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 21:50:29.424941       1 config.go:192] "Starting service config controller"
	I0528 21:50:29.424999       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0528 21:50:29.425039       1 config.go:101] "Starting endpoint slice config controller"
	I0528 21:50:29.425060       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0528 21:50:29.426982       1 config.go:319] "Starting node config controller"
	I0528 21:50:29.427019       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0528 21:50:29.525801       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0528 21:50:29.525838       1 shared_informer.go:320] Caches are synced for service config
	I0528 21:50:29.527305       1 shared_informer.go:320] Caches are synced for node config
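kube-proxy is running the iptables proxier in single-stack IPv4 mode and has synced its service, endpoint-slice, and node config caches. A sketch for spot-checking the NAT rules it programs, using the same binary and profile as the other commands in this report:

	out/minikube-linux-amd64 -p no-preload-290122 ssh "sudo iptables-save -t nat | grep KUBE- | head -n 5"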
	
	
	==> kube-scheduler [e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af] <==
	I0528 21:50:26.649713       1 serving.go:380] Generated self-signed cert in-memory
	W0528 21:50:28.604721       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0528 21:50:28.604764       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0528 21:50:28.604774       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0528 21:50:28.604780       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0528 21:50:28.668954       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0528 21:50:28.668999       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 21:50:28.672720       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0528 21:50:28.672811       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0528 21:50:28.672838       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0528 21:50:28.672857       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0528 21:50:28.773950       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 28 22:01:24 no-preload-290122 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 22:01:24 no-preload-290122 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 22:01:24 no-preload-290122 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 28 22:01:30 no-preload-290122 kubelet[1367]: E0528 22:01:30.848182    1367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j2khc" podUID="2254e89c-3a61-4523-99a2-27ec92e73c9a"
	May 28 22:01:44 no-preload-290122 kubelet[1367]: E0528 22:01:44.848787    1367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j2khc" podUID="2254e89c-3a61-4523-99a2-27ec92e73c9a"
	May 28 22:01:57 no-preload-290122 kubelet[1367]: E0528 22:01:57.849209    1367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j2khc" podUID="2254e89c-3a61-4523-99a2-27ec92e73c9a"
	May 28 22:02:10 no-preload-290122 kubelet[1367]: E0528 22:02:10.849252    1367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j2khc" podUID="2254e89c-3a61-4523-99a2-27ec92e73c9a"
	May 28 22:02:22 no-preload-290122 kubelet[1367]: E0528 22:02:22.848290    1367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j2khc" podUID="2254e89c-3a61-4523-99a2-27ec92e73c9a"
	May 28 22:02:24 no-preload-290122 kubelet[1367]: E0528 22:02:24.866735    1367 iptables.go:577] "Could not set up iptables canary" err=<
	May 28 22:02:24 no-preload-290122 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 22:02:24 no-preload-290122 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 22:02:24 no-preload-290122 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 22:02:24 no-preload-290122 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 28 22:02:33 no-preload-290122 kubelet[1367]: E0528 22:02:33.847778    1367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j2khc" podUID="2254e89c-3a61-4523-99a2-27ec92e73c9a"
	May 28 22:02:44 no-preload-290122 kubelet[1367]: E0528 22:02:44.848630    1367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j2khc" podUID="2254e89c-3a61-4523-99a2-27ec92e73c9a"
	May 28 22:02:56 no-preload-290122 kubelet[1367]: E0528 22:02:56.848040    1367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j2khc" podUID="2254e89c-3a61-4523-99a2-27ec92e73c9a"
	May 28 22:03:10 no-preload-290122 kubelet[1367]: E0528 22:03:10.848214    1367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j2khc" podUID="2254e89c-3a61-4523-99a2-27ec92e73c9a"
	May 28 22:03:24 no-preload-290122 kubelet[1367]: E0528 22:03:24.865924    1367 iptables.go:577] "Could not set up iptables canary" err=<
	May 28 22:03:24 no-preload-290122 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 22:03:24 no-preload-290122 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 22:03:24 no-preload-290122 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 22:03:24 no-preload-290122 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 28 22:03:25 no-preload-290122 kubelet[1367]: E0528 22:03:25.854015    1367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j2khc" podUID="2254e89c-3a61-4523-99a2-27ec92e73c9a"
	May 28 22:03:37 no-preload-290122 kubelet[1367]: E0528 22:03:37.848790    1367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j2khc" podUID="2254e89c-3a61-4523-99a2-27ec92e73c9a"
	May 28 22:03:49 no-preload-290122 kubelet[1367]: E0528 22:03:49.848245    1367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j2khc" podUID="2254e89c-3a61-4523-99a2-27ec92e73c9a"
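Every metrics-server sync above fails with ImagePullBackOff because fake.domain is not a resolvable registry, so the pull can never succeed. A sketch for reproducing the failure at the CRI level from inside the node, assuming the minikube binary path used elsewhere in this report:

	out/minikube-linux-amd64 -p no-preload-290122 ssh "sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4"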
	
	
	==> storage-provisioner [6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba] <==
	I0528 21:51:00.173200       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0528 21:51:00.186474       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0528 21:51:00.187166       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0528 21:51:17.591804       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0528 21:51:17.592091       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-290122_079d0c91-a672-4362-a8a6-bea900690c58!
	I0528 21:51:17.592707       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4314861e-97db-4897-9ca7-3871b33d30d9", APIVersion:"v1", ResourceVersion:"618", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-290122_079d0c91-a672-4362-a8a6-bea900690c58 became leader
	I0528 21:51:17.700227       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-290122_079d0c91-a672-4362-a8a6-bea900690c58!
	
	
	==> storage-provisioner [912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0] <==
	I0528 21:50:29.309838       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0528 21:50:59.313671       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
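This earlier attempt (restart count 2) timed out dialing the in-cluster API VIP 10.96.0.1:443 while the apiserver was still restarting; the later attempt shown above then acquired the leader lease. A sketch for confirming what that VIP maps to, assuming the same context:

	kubectl --context no-preload-290122 -n default get svc kubernetes
	kubectl --context no-preload-290122 -n default get endpoints kubernetes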
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-290122 -n no-preload-290122
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-290122 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-j2khc
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-290122 describe pod metrics-server-569cc877fc-j2khc
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-290122 describe pod metrics-server-569cc877fc-j2khc: exit status 1 (79.153268ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-j2khc" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-290122 describe pod metrics-server-569cc877fc-j2khc: exit status 1
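The NotFound above is most likely a namespace mismatch: the post-mortem describe is run without -n, so it looks in the default namespace, while the pod lives in kube-system (the node description earlier still lists it). A namespaced describe, assuming the same context and pod name:

	kubectl --context no-preload-290122 -n kube-system describe pod metrics-server-569cc877fc-j2khc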
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.47s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.62s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
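The harness polls the pod list for up to the 9m0s above and only warns on transient API errors such as the connection-refused responses that follow, rather than failing on the first one. A minimal shell sketch of that behavior; PROFILE is a placeholder for the old-k8s-version profile name, which is not shown in this excerpt:

	# PROFILE is a placeholder; the namespace and label selector are taken from the log line above.
	end=$((SECONDS + 540))   # 9m0s
	while [ "$SECONDS" -lt "$end" ]; do
	  if kubectl --context "$PROFILE" -n kubernetes-dashboard get pods \
	       -l k8s-app=kubernetes-dashboard --no-headers 2>/dev/null | grep -q Running; then
	    echo "dashboard pod is up"; break
	  fi
	  sleep 5
	done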
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
E0528 21:57:37.450857   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/functional-193928/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
E0528 21:57:45.647131   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
E0528 21:57:59.175466   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/calico-110727/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
E0528 21:58:18.382261   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/custom-flannel-110727/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
E0528 21:58:20.452990   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/flannel-110727/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
E0528 21:59:04.181837   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/bridge-110727/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
E0528 21:59:25.052567   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/enable-default-cni-110727/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
E0528 21:59:32.641633   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kindnet-110727/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
E0528 21:59:42.597740   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
E0528 21:59:43.498392   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/flannel-110727/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
E0528 21:59:45.763464   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/auto-110727/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
E0528 22:00:27.230121   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/bridge-110727/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
E0528 22:00:48.096157   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/enable-default-cni-110727/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
E0528 22:01:36.131907   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/calico-110727/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
E0528 22:01:55.337871   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/custom-flannel-110727/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
E0528 22:02:37.450961   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/functional-193928/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
(previous WARNING repeated 20 more times)
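The WARNING lines in this section come from the test helper polling the apiserver for kubernetes-dashboard pods by label selector while the node is still down, so every List call fails with "connection refused" until the cluster comes back. Below is a minimal sketch of an equivalent poll using client-go; the kubeconfig path, attempt limit, and sleep interval are illustrative assumptions, not the values the harness actually uses.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: path to the kubeconfig of the profile under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// List pods in the kubernetes-dashboard namespace with the same label
	// selector the helper uses, retrying while the apiserver is unreachable.
	for attempt := 0; attempt < 60; attempt++ { // assumed attempt limit
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			fmt.Printf("WARNING: pod list for \"kubernetes-dashboard\" returned: %v\n", err)
			time.Sleep(3 * time.Second) // assumed poll interval
			continue
		}
		fmt.Printf("found %d dashboard pod(s)\n", len(pods.Items))
		return
	}
	fmt.Println("timed out waiting for the kubernetes-dashboard pod")
}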
E0528 22:03:20.453570   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/flannel-110727/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
(previous WARNING repeated 43 more times)
E0528 22:04:04.182476   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/bridge-110727/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
(previous WARNING repeated 20 more times)
E0528 22:04:25.051595   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/enable-default-cni-110727/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
(previous WARNING repeated 6 more times)
E0528 22:04:32.640939   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kindnet-110727/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
(previous WARNING repeated 9 more times)
E0528 22:04:42.597464   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
(previous WARNING repeated 2 more times)
E0528 22:04:45.763341   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/auto-110727/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
(previous WARNING repeated 54 more times)
E0528 22:05:40.499556   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/functional-193928/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
(previous WARNING repeated 18 more times)
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
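For reference, the Get above is the pod list that helpers_test.go issues through the Kubernetes client on every poll tick. An equivalent manual check, assuming the kubeconfig context carries the profile name as it does for the other profiles in this report, would be:

	kubectl --context old-k8s-version-499466 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

It fails the same way while nothing answers on 192.168.39.8:8443, and the final "client rate limiter" warning only indicates that the poll's context deadline expired before the client was permitted another request.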
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-499466 -n old-k8s-version-499466
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-499466 -n old-k8s-version-499466: exit status 2 (225.598638ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-499466" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
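The 9m0s readiness wait that timed out above can be approximated by hand; this is a sketch under the same context-name assumption, not the harness's actual client-go poll:

	kubectl --context old-k8s-version-499466 -n kubernetes-dashboard wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m

With the profile's apiserver stopped (status "Stopped" above), this would return the same connection-refused error instead of ever observing a Ready dashboard pod.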
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-499466 -n old-k8s-version-499466
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-499466 -n old-k8s-version-499466: exit status 2 (227.341294ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-499466 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-110727                           | enable-default-cni-110727    | jenkins | v1.33.1 | 28 May 24 21:39 UTC | 28 May 24 21:39 UTC |
	|         | sudo systemctl status crio                             |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-110727                           | enable-default-cni-110727    | jenkins | v1.33.1 | 28 May 24 21:39 UTC | 28 May 24 21:39 UTC |
	|         | sudo systemctl cat crio                                |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-110727                           | enable-default-cni-110727    | jenkins | v1.33.1 | 28 May 24 21:39 UTC | 28 May 24 21:39 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |         |                     |                     |
	|         | \;                                                     |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-110727                           | enable-default-cni-110727    | jenkins | v1.33.1 | 28 May 24 21:39 UTC | 28 May 24 21:39 UTC |
	|         | sudo crio config                                       |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-110727                           | enable-default-cni-110727    | jenkins | v1.33.1 | 28 May 24 21:39 UTC | 28 May 24 21:39 UTC |
	| start   | -p embed-certs-595279                                  | embed-certs-595279           | jenkins | v1.33.1 | 28 May 24 21:39 UTC | 28 May 24 21:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-290122             | no-preload-290122            | jenkins | v1.33.1 | 28 May 24 21:41 UTC | 28 May 24 21:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-290122                                   | no-preload-290122            | jenkins | v1.33.1 | 28 May 24 21:41 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-595279            | embed-certs-595279           | jenkins | v1.33.1 | 28 May 24 21:41 UTC | 28 May 24 21:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-595279                                  | embed-certs-595279           | jenkins | v1.33.1 | 28 May 24 21:41 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-499466        | old-k8s-version-499466       | jenkins | v1.33.1 | 28 May 24 21:43 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-290122                  | no-preload-290122            | jenkins | v1.33.1 | 28 May 24 21:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-595279                 | embed-certs-595279           | jenkins | v1.33.1 | 28 May 24 21:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-290122                                   | no-preload-290122            | jenkins | v1.33.1 | 28 May 24 21:44 UTC | 28 May 24 21:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-595279                                  | embed-certs-595279           | jenkins | v1.33.1 | 28 May 24 21:44 UTC | 28 May 24 21:53 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-499466                              | old-k8s-version-499466       | jenkins | v1.33.1 | 28 May 24 21:45 UTC | 28 May 24 21:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-499466             | old-k8s-version-499466       | jenkins | v1.33.1 | 28 May 24 21:45 UTC | 28 May 24 21:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-499466                              | old-k8s-version-499466       | jenkins | v1.33.1 | 28 May 24 21:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-257793                              | cert-expiration-257793       | jenkins | v1.33.1 | 28 May 24 21:46 UTC | 28 May 24 21:46 UTC |
	| delete  | -p                                                     | disable-driver-mounts-807140 | jenkins | v1.33.1 | 28 May 24 21:46 UTC | 28 May 24 21:46 UTC |
	|         | disable-driver-mounts-807140                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-249165 | jenkins | v1.33.1 | 28 May 24 21:46 UTC | 28 May 24 21:50 UTC |
	|         | default-k8s-diff-port-249165                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-249165  | default-k8s-diff-port-249165 | jenkins | v1.33.1 | 28 May 24 21:51 UTC | 28 May 24 21:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-249165 | jenkins | v1.33.1 | 28 May 24 21:51 UTC |                     |
	|         | default-k8s-diff-port-249165                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-249165       | default-k8s-diff-port-249165 | jenkins | v1.33.1 | 28 May 24 21:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-249165 | jenkins | v1.33.1 | 28 May 24 21:53 UTC | 28 May 24 22:04 UTC |
	|         | default-k8s-diff-port-249165                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/28 21:53:40
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0528 21:53:40.744358   73188 out.go:291] Setting OutFile to fd 1 ...
	I0528 21:53:40.744653   73188 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:53:40.744664   73188 out.go:304] Setting ErrFile to fd 2...
	I0528 21:53:40.744668   73188 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:53:40.744923   73188 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
	I0528 21:53:40.745490   73188 out.go:298] Setting JSON to false
	I0528 21:53:40.746663   73188 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5764,"bootTime":1716927457,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0528 21:53:40.746723   73188 start.go:139] virtualization: kvm guest
	I0528 21:53:40.749013   73188 out.go:177] * [default-k8s-diff-port-249165] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0528 21:53:40.750611   73188 notify.go:220] Checking for updates...
	I0528 21:53:40.750618   73188 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 21:53:40.752116   73188 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 21:53:40.753384   73188 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 21:53:40.754612   73188 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 21:53:40.755846   73188 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0528 21:53:40.756972   73188 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 21:53:40.758627   73188 config.go:182] Loaded profile config "default-k8s-diff-port-249165": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 21:53:40.759050   73188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:53:40.759106   73188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:53:40.774337   73188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37895
	I0528 21:53:40.774754   73188 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:53:40.775318   73188 main.go:141] libmachine: Using API Version  1
	I0528 21:53:40.775344   73188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:53:40.775633   73188 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:53:40.775791   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 21:53:40.776007   73188 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 21:53:40.776327   73188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:53:40.776382   73188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:53:40.790531   73188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41603
	I0528 21:53:40.790970   73188 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:53:40.791471   73188 main.go:141] libmachine: Using API Version  1
	I0528 21:53:40.791498   73188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:53:40.791802   73188 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:53:40.791983   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 21:53:40.826633   73188 out.go:177] * Using the kvm2 driver based on existing profile
	I0528 21:53:40.827847   73188 start.go:297] selected driver: kvm2
	I0528 21:53:40.827863   73188 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-249165 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-249165 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.48 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:53:40.827981   73188 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0528 21:53:40.828705   73188 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 21:53:40.828777   73188 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18966-3963/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0528 21:53:40.844223   73188 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0528 21:53:40.844574   73188 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 21:53:40.844638   73188 cni.go:84] Creating CNI manager for ""
	I0528 21:53:40.844650   73188 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 21:53:40.844682   73188 start.go:340] cluster config:
	{Name:default-k8s-diff-port-249165 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-249165 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.48 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:53:40.844775   73188 iso.go:125] acquiring lock: {Name:mk0ef48bd19f4019c3ee87391d15b9afc1d5e8d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 21:53:40.846544   73188 out.go:177] * Starting "default-k8s-diff-port-249165" primary control-plane node in "default-k8s-diff-port-249165" cluster
	I0528 21:53:40.847754   73188 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 21:53:40.847792   73188 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0528 21:53:40.847801   73188 cache.go:56] Caching tarball of preloaded images
	I0528 21:53:40.847870   73188 preload.go:173] Found /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0528 21:53:40.847880   73188 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0528 21:53:40.847964   73188 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/default-k8s-diff-port-249165/config.json ...
	I0528 21:53:40.848196   73188 start.go:360] acquireMachinesLock for default-k8s-diff-port-249165: {Name:mk6fb3103b370c7f3d9923fd9de7afe2c77239d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0528 21:53:40.848256   73188 start.go:364] duration metric: took 38.994µs to acquireMachinesLock for "default-k8s-diff-port-249165"
	I0528 21:53:40.848271   73188 start.go:96] Skipping create...Using existing machine configuration
	I0528 21:53:40.848281   73188 fix.go:54] fixHost starting: 
	I0528 21:53:40.848534   73188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:53:40.848571   73188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:53:40.863227   73188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32843
	I0528 21:53:40.863708   73188 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:53:40.864162   73188 main.go:141] libmachine: Using API Version  1
	I0528 21:53:40.864182   73188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:53:40.864616   73188 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:53:40.864794   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 21:53:40.864952   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetState
	I0528 21:53:40.866583   73188 fix.go:112] recreateIfNeeded on default-k8s-diff-port-249165: state=Running err=<nil>
	W0528 21:53:40.866600   73188 fix.go:138] unexpected machine state, will restart: <nil>
	I0528 21:53:40.868382   73188 out.go:177] * Updating the running kvm2 "default-k8s-diff-port-249165" VM ...
	I0528 21:53:38.450836   70002 logs.go:123] Gathering logs for storage-provisioner [9c5ee70d85c3e595c91ab0c4bcfdbd1f1161b2643af86fce85c056fc5d38482d] ...
	I0528 21:53:38.450866   70002 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c5ee70d85c3e595c91ab0c4bcfdbd1f1161b2643af86fce85c056fc5d38482d"
	I0528 21:53:38.485575   70002 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:53:38.485610   70002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:53:38.854290   70002 logs.go:123] Gathering logs for container status ...
	I0528 21:53:38.854325   70002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:53:38.902357   70002 logs.go:123] Gathering logs for dmesg ...
	I0528 21:53:38.902389   70002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:53:38.916785   70002 logs.go:123] Gathering logs for etcd [3047accd150d979b115d9296d1b09bdad9f23c29d7c066800914fe3e6e001d3c] ...
	I0528 21:53:38.916820   70002 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3047accd150d979b115d9296d1b09bdad9f23c29d7c066800914fe3e6e001d3c"
	I0528 21:53:38.982119   70002 logs.go:123] Gathering logs for kube-apiserver [056fb79dac85895cd23ca69b8a238157e22babd5bf16ae416bb52d6e9b470622] ...
	I0528 21:53:38.982148   70002 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 056fb79dac85895cd23ca69b8a238157e22babd5bf16ae416bb52d6e9b470622"
	I0528 21:53:39.031038   70002 logs.go:123] Gathering logs for kube-proxy [cfb41c075cb48a9d458fcc2a85aca29b37f56931246adac5b1230661c528edcc] ...
	I0528 21:53:39.031066   70002 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cfb41c075cb48a9d458fcc2a85aca29b37f56931246adac5b1230661c528edcc"
	I0528 21:53:39.068094   70002 logs.go:123] Gathering logs for kube-controller-manager [b5366e4c2bcdabc25f55a71903840f5a97745fb1bac231cf9c73daaf2b9dab89] ...
	I0528 21:53:39.068123   70002 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5366e4c2bcdabc25f55a71903840f5a97745fb1bac231cf9c73daaf2b9dab89"
	I0528 21:53:39.129214   70002 logs.go:123] Gathering logs for kubelet ...
	I0528 21:53:39.129248   70002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:53:39.191483   70002 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:53:39.191523   70002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0528 21:53:41.813698   70002 system_pods.go:59] 8 kube-system pods found
	I0528 21:53:41.813725   70002 system_pods.go:61] "coredns-7db6d8ff4d-8cb7b" [b3908d89-cfc6-4f1a-9aef-861aac0d3e29] Running
	I0528 21:53:41.813730   70002 system_pods.go:61] "etcd-embed-certs-595279" [58581274-a239-4367-926e-c333f201d4f8] Running
	I0528 21:53:41.813733   70002 system_pods.go:61] "kube-apiserver-embed-certs-595279" [cc2dd164-709f-4e59-81bc-ce9d30bbced9] Running
	I0528 21:53:41.813736   70002 system_pods.go:61] "kube-controller-manager-embed-certs-595279" [e049af67-fff8-466f-96ff-81d148602884] Running
	I0528 21:53:41.813739   70002 system_pods.go:61] "kube-proxy-pnl5w" [9c2c68bc-42c2-425e-ae35-a8c07b5d5221] Running
	I0528 21:53:41.813742   70002 system_pods.go:61] "kube-scheduler-embed-certs-595279" [ab6bff2a-7266-4c5c-96bc-c87ef15dd342] Running
	I0528 21:53:41.813748   70002 system_pods.go:61] "metrics-server-569cc877fc-f6fz2" [b5e432cd-3b95-4f20-b9b3-c498512a7564] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0528 21:53:41.813751   70002 system_pods.go:61] "storage-provisioner" [7bf52279-1fbc-40e5-8376-992c545c55dd] Running
	I0528 21:53:41.813771   70002 system_pods.go:74] duration metric: took 3.894565784s to wait for pod list to return data ...
	I0528 21:53:41.813780   70002 default_sa.go:34] waiting for default service account to be created ...
	I0528 21:53:41.816297   70002 default_sa.go:45] found service account: "default"
	I0528 21:53:41.816319   70002 default_sa.go:55] duration metric: took 2.532587ms for default service account to be created ...
	I0528 21:53:41.816326   70002 system_pods.go:116] waiting for k8s-apps to be running ...
	I0528 21:53:41.821407   70002 system_pods.go:86] 8 kube-system pods found
	I0528 21:53:41.821437   70002 system_pods.go:89] "coredns-7db6d8ff4d-8cb7b" [b3908d89-cfc6-4f1a-9aef-861aac0d3e29] Running
	I0528 21:53:41.821447   70002 system_pods.go:89] "etcd-embed-certs-595279" [58581274-a239-4367-926e-c333f201d4f8] Running
	I0528 21:53:41.821453   70002 system_pods.go:89] "kube-apiserver-embed-certs-595279" [cc2dd164-709f-4e59-81bc-ce9d30bbced9] Running
	I0528 21:53:41.821458   70002 system_pods.go:89] "kube-controller-manager-embed-certs-595279" [e049af67-fff8-466f-96ff-81d148602884] Running
	I0528 21:53:41.821461   70002 system_pods.go:89] "kube-proxy-pnl5w" [9c2c68bc-42c2-425e-ae35-a8c07b5d5221] Running
	I0528 21:53:41.821465   70002 system_pods.go:89] "kube-scheduler-embed-certs-595279" [ab6bff2a-7266-4c5c-96bc-c87ef15dd342] Running
	I0528 21:53:41.821472   70002 system_pods.go:89] "metrics-server-569cc877fc-f6fz2" [b5e432cd-3b95-4f20-b9b3-c498512a7564] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0528 21:53:41.821480   70002 system_pods.go:89] "storage-provisioner" [7bf52279-1fbc-40e5-8376-992c545c55dd] Running
	I0528 21:53:41.821489   70002 system_pods.go:126] duration metric: took 5.157831ms to wait for k8s-apps to be running ...
	I0528 21:53:41.821498   70002 system_svc.go:44] waiting for kubelet service to be running ....
	I0528 21:53:41.821538   70002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 21:53:41.838819   70002 system_svc.go:56] duration metric: took 17.315204ms WaitForService to wait for kubelet
	I0528 21:53:41.838844   70002 kubeadm.go:576] duration metric: took 4m26.419891509s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 21:53:41.838864   70002 node_conditions.go:102] verifying NodePressure condition ...
	I0528 21:53:41.841408   70002 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 21:53:41.841424   70002 node_conditions.go:123] node cpu capacity is 2
	I0528 21:53:41.841433   70002 node_conditions.go:105] duration metric: took 2.56566ms to run NodePressure ...
	I0528 21:53:41.841445   70002 start.go:240] waiting for startup goroutines ...
	I0528 21:53:41.841452   70002 start.go:245] waiting for cluster config update ...
	I0528 21:53:41.841463   70002 start.go:254] writing updated cluster config ...
	I0528 21:53:41.841709   70002 ssh_runner.go:195] Run: rm -f paused
	I0528 21:53:41.886820   70002 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0528 21:53:41.888710   70002 out.go:177] * Done! kubectl is now configured to use "embed-certs-595279" cluster and "default" namespace by default
	I0528 21:53:40.749506   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:53:43.248909   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:53:40.869524   73188 machine.go:94] provisionDockerMachine start ...
	I0528 21:53:40.869542   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 21:53:40.869730   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 21:53:40.872099   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:53:40.872470   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:53:40.872491   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:53:40.872625   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHPort
	I0528 21:53:40.872772   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:53:40.872952   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:53:40.873092   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHUsername
	I0528 21:53:40.873253   73188 main.go:141] libmachine: Using SSH client type: native
	I0528 21:53:40.873429   73188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.48 22 <nil> <nil>}
	I0528 21:53:40.873438   73188 main.go:141] libmachine: About to run SSH command:
	hostname
	I0528 21:53:43.770029   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:53:45.748750   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:53:48.248904   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:53:46.841982   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:53:50.249442   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:53:52.749680   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:53:52.922023   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:53:55.251148   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:53:57.748960   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:53:55.994071   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:53:59.749114   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:02.248306   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:02.074025   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:05.145996   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:04.248616   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:06.749284   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:09.247806   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:11.249481   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:13.748196   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:12.825536   70393 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0528 21:54:12.825810   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:54:12.826159   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:54:14.266167   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:15.749468   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:18.248675   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:17.826706   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:54:17.826945   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:54:17.338025   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:20.248941   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:22.749284   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:23.417971   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:25.248681   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:27.748556   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:27.827370   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:54:27.827610   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:54:26.490049   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:29.748865   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:32.248746   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:32.569987   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:35.641969   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:34.249483   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:36.748835   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:38.749264   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:41.251039   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:43.248816   69886 pod_ready.go:81] duration metric: took 4m0.006582939s for pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace to be "Ready" ...
	E0528 21:54:43.248839   69886 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0528 21:54:43.248847   69886 pod_ready.go:38] duration metric: took 4m4.041932949s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 21:54:43.248863   69886 api_server.go:52] waiting for apiserver process to appear ...
	I0528 21:54:43.248889   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:54:43.248933   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:54:43.296609   69886 cri.go:89] found id: "42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc"
	I0528 21:54:43.296630   69886 cri.go:89] found id: ""
	I0528 21:54:43.296638   69886 logs.go:276] 1 containers: [42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc]
	I0528 21:54:43.296694   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:43.301171   69886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:54:43.301211   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:54:43.340772   69886 cri.go:89] found id: "48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e"
	I0528 21:54:43.340793   69886 cri.go:89] found id: ""
	I0528 21:54:43.340799   69886 logs.go:276] 1 containers: [48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e]
	I0528 21:54:43.340843   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:43.345422   69886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:54:43.345489   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:54:43.392432   69886 cri.go:89] found id: "ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b"
	I0528 21:54:43.392458   69886 cri.go:89] found id: ""
	I0528 21:54:43.392467   69886 logs.go:276] 1 containers: [ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b]
	I0528 21:54:43.392521   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:43.396870   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:54:43.396943   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:54:43.433491   69886 cri.go:89] found id: "e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af"
	I0528 21:54:43.433516   69886 cri.go:89] found id: ""
	I0528 21:54:43.433525   69886 logs.go:276] 1 containers: [e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af]
	I0528 21:54:43.433584   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:43.438209   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:54:43.438276   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:54:43.479257   69886 cri.go:89] found id: "9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910"
	I0528 21:54:43.479299   69886 cri.go:89] found id: ""
	I0528 21:54:43.479309   69886 logs.go:276] 1 containers: [9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910]
	I0528 21:54:43.479425   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:43.484063   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:54:43.484127   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:54:43.523360   69886 cri.go:89] found id: "e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a"
	I0528 21:54:43.523384   69886 cri.go:89] found id: ""
	I0528 21:54:43.523394   69886 logs.go:276] 1 containers: [e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a]
	I0528 21:54:43.523443   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:43.527859   69886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:54:43.527915   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:54:43.565610   69886 cri.go:89] found id: ""
	I0528 21:54:43.565631   69886 logs.go:276] 0 containers: []
	W0528 21:54:43.565638   69886 logs.go:278] No container was found matching "kindnet"
	I0528 21:54:43.565643   69886 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0528 21:54:43.565687   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0528 21:54:43.603133   69886 cri.go:89] found id: "6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba"
	I0528 21:54:43.603155   69886 cri.go:89] found id: "912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0"
	I0528 21:54:43.603159   69886 cri.go:89] found id: ""
	I0528 21:54:43.603166   69886 logs.go:276] 2 containers: [6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba 912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0]
	I0528 21:54:43.603233   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:43.607421   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:43.611570   69886 logs.go:123] Gathering logs for kube-apiserver [42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc] ...
	I0528 21:54:43.611593   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc"
	I0528 21:54:43.656455   69886 logs.go:123] Gathering logs for etcd [48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e] ...
	I0528 21:54:43.656483   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e"
	I0528 21:54:43.708385   69886 logs.go:123] Gathering logs for kube-controller-manager [e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a] ...
	I0528 21:54:43.708416   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a"
	I0528 21:54:43.766267   69886 logs.go:123] Gathering logs for storage-provisioner [6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba] ...
	I0528 21:54:43.766300   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba"
	I0528 21:54:43.813734   69886 logs.go:123] Gathering logs for kube-proxy [9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910] ...
	I0528 21:54:43.813782   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910"
	I0528 21:54:43.857289   69886 logs.go:123] Gathering logs for storage-provisioner [912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0] ...
	I0528 21:54:43.857317   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0"
	I0528 21:54:43.897976   69886 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:54:43.898001   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:54:41.721973   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:44.798063   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:44.394070   69886 logs.go:123] Gathering logs for kubelet ...
	I0528 21:54:44.394112   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:54:44.450041   69886 logs.go:123] Gathering logs for dmesg ...
	I0528 21:54:44.450078   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:54:44.464067   69886 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:54:44.464092   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0528 21:54:44.588402   69886 logs.go:123] Gathering logs for coredns [ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b] ...
	I0528 21:54:44.588432   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b"
	I0528 21:54:44.631477   69886 logs.go:123] Gathering logs for kube-scheduler [e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af] ...
	I0528 21:54:44.631505   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af"
	I0528 21:54:44.676531   69886 logs.go:123] Gathering logs for container status ...
	I0528 21:54:44.676562   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
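	The ssh_runner/logs.go entries above follow a fixed pattern: discover a component's container ID with `crictl ps -a --quiet --name=<component>`, then tail its last 400 log lines with `crictl logs --tail 400 <id>`. The following is a local sketch of that pattern only; minikube runs these commands over SSH inside the guest, and this sketch assumes crictl is on PATH with sudo rights.

```go
// cri_logs.go - sketch of the container-discovery + log-tail pattern in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same flags as in the log: --quiet makes crictl print only container IDs.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
	if err != nil {
		fmt.Println("crictl ps failed:", err)
		return
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		fmt.Println("no container was found matching \"kube-apiserver\"")
		return
	}
	// Tail the most recent 400 lines, mirroring `crictl logs --tail 400 <id>`.
	logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", ids[0]).CombinedOutput()
	if err != nil {
		fmt.Println("crictl logs failed:", err)
	}
	fmt.Print(string(logs))
}
```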
	I0528 21:54:47.229026   69886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:54:47.247014   69886 api_server.go:72] duration metric: took 4m15.746572678s to wait for apiserver process to appear ...
	I0528 21:54:47.247043   69886 api_server.go:88] waiting for apiserver healthz status ...
	I0528 21:54:47.247085   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:54:47.247153   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:54:47.291560   69886 cri.go:89] found id: "42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc"
	I0528 21:54:47.291592   69886 cri.go:89] found id: ""
	I0528 21:54:47.291602   69886 logs.go:276] 1 containers: [42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc]
	I0528 21:54:47.291667   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:47.296538   69886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:54:47.296597   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:54:47.335786   69886 cri.go:89] found id: "48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e"
	I0528 21:54:47.335809   69886 cri.go:89] found id: ""
	I0528 21:54:47.335817   69886 logs.go:276] 1 containers: [48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e]
	I0528 21:54:47.335861   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:47.340222   69886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:54:47.340295   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:54:47.376487   69886 cri.go:89] found id: "ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b"
	I0528 21:54:47.376518   69886 cri.go:89] found id: ""
	I0528 21:54:47.376528   69886 logs.go:276] 1 containers: [ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b]
	I0528 21:54:47.376587   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:47.380986   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:54:47.381043   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:54:47.419121   69886 cri.go:89] found id: "e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af"
	I0528 21:54:47.419144   69886 cri.go:89] found id: ""
	I0528 21:54:47.419151   69886 logs.go:276] 1 containers: [e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af]
	I0528 21:54:47.419194   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:47.423323   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:54:47.423378   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:54:47.460781   69886 cri.go:89] found id: "9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910"
	I0528 21:54:47.460806   69886 cri.go:89] found id: ""
	I0528 21:54:47.460813   69886 logs.go:276] 1 containers: [9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910]
	I0528 21:54:47.460856   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:47.465054   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:54:47.465107   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:54:47.510054   69886 cri.go:89] found id: "e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a"
	I0528 21:54:47.510077   69886 cri.go:89] found id: ""
	I0528 21:54:47.510085   69886 logs.go:276] 1 containers: [e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a]
	I0528 21:54:47.510136   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:47.514707   69886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:54:47.514764   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:54:47.551564   69886 cri.go:89] found id: ""
	I0528 21:54:47.551587   69886 logs.go:276] 0 containers: []
	W0528 21:54:47.551594   69886 logs.go:278] No container was found matching "kindnet"
	I0528 21:54:47.551600   69886 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0528 21:54:47.551647   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0528 21:54:47.591484   69886 cri.go:89] found id: "6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba"
	I0528 21:54:47.591506   69886 cri.go:89] found id: "912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0"
	I0528 21:54:47.591511   69886 cri.go:89] found id: ""
	I0528 21:54:47.591520   69886 logs.go:276] 2 containers: [6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba 912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0]
	I0528 21:54:47.591581   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:47.596620   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:47.600861   69886 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:54:47.600884   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:54:48.031181   69886 logs.go:123] Gathering logs for kubelet ...
	I0528 21:54:48.031218   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:54:48.085321   69886 logs.go:123] Gathering logs for kube-apiserver [42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc] ...
	I0528 21:54:48.085354   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc"
	I0528 21:54:48.135504   69886 logs.go:123] Gathering logs for coredns [ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b] ...
	I0528 21:54:48.135538   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b"
	I0528 21:54:48.172440   69886 logs.go:123] Gathering logs for kube-proxy [9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910] ...
	I0528 21:54:48.172474   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910"
	I0528 21:54:48.210817   69886 logs.go:123] Gathering logs for storage-provisioner [6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba] ...
	I0528 21:54:48.210849   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba"
	I0528 21:54:48.248170   69886 logs.go:123] Gathering logs for storage-provisioner [912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0] ...
	I0528 21:54:48.248196   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0"
	I0528 21:54:48.290905   69886 logs.go:123] Gathering logs for container status ...
	I0528 21:54:48.290933   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:54:48.344302   69886 logs.go:123] Gathering logs for dmesg ...
	I0528 21:54:48.344333   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:54:48.363912   69886 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:54:48.363940   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0528 21:54:48.490794   69886 logs.go:123] Gathering logs for etcd [48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e] ...
	I0528 21:54:48.490836   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e"
	I0528 21:54:48.538412   69886 logs.go:123] Gathering logs for kube-scheduler [e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af] ...
	I0528 21:54:48.538443   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af"
	I0528 21:54:48.574693   69886 logs.go:123] Gathering logs for kube-controller-manager [e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a] ...
	I0528 21:54:48.574724   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a"
	I0528 21:54:47.828383   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:54:47.828686   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:54:51.128492   69886 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0528 21:54:51.132736   69886 api_server.go:279] https://192.168.50.138:8443/healthz returned 200:
	ok
	I0528 21:54:51.133908   69886 api_server.go:141] control plane version: v1.30.1
	I0528 21:54:51.133927   69886 api_server.go:131] duration metric: took 3.886877047s to wait for apiserver health ...
	I0528 21:54:51.133935   69886 system_pods.go:43] waiting for kube-system pods to appear ...
	I0528 21:54:51.133953   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:54:51.134009   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:54:51.174021   69886 cri.go:89] found id: "42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc"
	I0528 21:54:51.174042   69886 cri.go:89] found id: ""
	I0528 21:54:51.174049   69886 logs.go:276] 1 containers: [42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc]
	I0528 21:54:51.174100   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:51.179416   69886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:54:51.179487   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:54:51.218954   69886 cri.go:89] found id: "48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e"
	I0528 21:54:51.218981   69886 cri.go:89] found id: ""
	I0528 21:54:51.218992   69886 logs.go:276] 1 containers: [48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e]
	I0528 21:54:51.219055   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:51.224849   69886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:54:51.224920   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:54:51.265274   69886 cri.go:89] found id: "ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b"
	I0528 21:54:51.265304   69886 cri.go:89] found id: ""
	I0528 21:54:51.265314   69886 logs.go:276] 1 containers: [ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b]
	I0528 21:54:51.265388   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:51.270027   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:54:51.270104   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:54:51.316234   69886 cri.go:89] found id: "e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af"
	I0528 21:54:51.316259   69886 cri.go:89] found id: ""
	I0528 21:54:51.316269   69886 logs.go:276] 1 containers: [e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af]
	I0528 21:54:51.316324   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:51.320705   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:54:51.320771   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:54:51.358054   69886 cri.go:89] found id: "9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910"
	I0528 21:54:51.358079   69886 cri.go:89] found id: ""
	I0528 21:54:51.358089   69886 logs.go:276] 1 containers: [9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910]
	I0528 21:54:51.358136   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:51.363687   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:54:51.363753   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:54:51.409441   69886 cri.go:89] found id: "e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a"
	I0528 21:54:51.409462   69886 cri.go:89] found id: ""
	I0528 21:54:51.409470   69886 logs.go:276] 1 containers: [e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a]
	I0528 21:54:51.409517   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:51.414069   69886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:54:51.414125   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:54:51.454212   69886 cri.go:89] found id: ""
	I0528 21:54:51.454245   69886 logs.go:276] 0 containers: []
	W0528 21:54:51.454255   69886 logs.go:278] No container was found matching "kindnet"
	I0528 21:54:51.454263   69886 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0528 21:54:51.454324   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0528 21:54:51.492146   69886 cri.go:89] found id: "6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba"
	I0528 21:54:51.492174   69886 cri.go:89] found id: "912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0"
	I0528 21:54:51.492181   69886 cri.go:89] found id: ""
	I0528 21:54:51.492190   69886 logs.go:276] 2 containers: [6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba 912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0]
	I0528 21:54:51.492262   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:51.497116   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:51.501448   69886 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:54:51.501469   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:54:51.871114   69886 logs.go:123] Gathering logs for container status ...
	I0528 21:54:51.871151   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:54:51.918562   69886 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:54:51.918590   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0528 21:54:52.031780   69886 logs.go:123] Gathering logs for kube-apiserver [42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc] ...
	I0528 21:54:52.031819   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc"
	I0528 21:54:52.090798   69886 logs.go:123] Gathering logs for kube-proxy [9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910] ...
	I0528 21:54:52.090827   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910"
	I0528 21:54:52.131645   69886 logs.go:123] Gathering logs for kube-controller-manager [e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a] ...
	I0528 21:54:52.131673   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a"
	I0528 21:54:52.191137   69886 logs.go:123] Gathering logs for storage-provisioner [6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba] ...
	I0528 21:54:52.191172   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba"
	I0528 21:54:52.241028   69886 logs.go:123] Gathering logs for storage-provisioner [912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0] ...
	I0528 21:54:52.241054   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0"
	I0528 21:54:52.276075   69886 logs.go:123] Gathering logs for kubelet ...
	I0528 21:54:52.276115   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:54:52.328268   69886 logs.go:123] Gathering logs for dmesg ...
	I0528 21:54:52.328307   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:54:52.342509   69886 logs.go:123] Gathering logs for etcd [48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e] ...
	I0528 21:54:52.342542   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e"
	I0528 21:54:52.390934   69886 logs.go:123] Gathering logs for coredns [ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b] ...
	I0528 21:54:52.390980   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b"
	I0528 21:54:52.429778   69886 logs.go:123] Gathering logs for kube-scheduler [e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af] ...
	I0528 21:54:52.429809   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af"
	I0528 21:54:54.975461   69886 system_pods.go:59] 8 kube-system pods found
	I0528 21:54:54.975495   69886 system_pods.go:61] "coredns-7db6d8ff4d-fmk2h" [a084dfb5-5818-4244-9052-a9f861b45617] Running
	I0528 21:54:54.975502   69886 system_pods.go:61] "etcd-no-preload-290122" [85b87ff0-50e4-4b23-b4dd-8442068b1af3] Running
	I0528 21:54:54.975508   69886 system_pods.go:61] "kube-apiserver-no-preload-290122" [2e2cecbc-1755-4cdc-8963-ab1e5b73e9f1] Running
	I0528 21:54:54.975514   69886 system_pods.go:61] "kube-controller-manager-no-preload-290122" [9a6218b4-fbd3-46ff-8eb5-a19f5ed9db72] Running
	I0528 21:54:54.975519   69886 system_pods.go:61] "kube-proxy-w45qh" [f962c73d-872d-4f78-a628-267cb0be49bb] Running
	I0528 21:54:54.975524   69886 system_pods.go:61] "kube-scheduler-no-preload-290122" [d191845d-d1ff-45d8-8601-b0395e2752f1] Running
	I0528 21:54:54.975532   69886 system_pods.go:61] "metrics-server-569cc877fc-j2khc" [2254e89c-3a61-4523-99a2-27ec92e73c9a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0528 21:54:54.975540   69886 system_pods.go:61] "storage-provisioner" [fc1a5463-05e0-4213-a7a8-2dd7f355ac36] Running
	I0528 21:54:54.975549   69886 system_pods.go:74] duration metric: took 3.841608486s to wait for pod list to return data ...
	I0528 21:54:54.975564   69886 default_sa.go:34] waiting for default service account to be created ...
	I0528 21:54:54.977757   69886 default_sa.go:45] found service account: "default"
	I0528 21:54:54.977794   69886 default_sa.go:55] duration metric: took 2.222664ms for default service account to be created ...
	I0528 21:54:54.977803   69886 system_pods.go:116] waiting for k8s-apps to be running ...
	I0528 21:54:54.982505   69886 system_pods.go:86] 8 kube-system pods found
	I0528 21:54:54.982527   69886 system_pods.go:89] "coredns-7db6d8ff4d-fmk2h" [a084dfb5-5818-4244-9052-a9f861b45617] Running
	I0528 21:54:54.982532   69886 system_pods.go:89] "etcd-no-preload-290122" [85b87ff0-50e4-4b23-b4dd-8442068b1af3] Running
	I0528 21:54:54.982537   69886 system_pods.go:89] "kube-apiserver-no-preload-290122" [2e2cecbc-1755-4cdc-8963-ab1e5b73e9f1] Running
	I0528 21:54:54.982541   69886 system_pods.go:89] "kube-controller-manager-no-preload-290122" [9a6218b4-fbd3-46ff-8eb5-a19f5ed9db72] Running
	I0528 21:54:54.982545   69886 system_pods.go:89] "kube-proxy-w45qh" [f962c73d-872d-4f78-a628-267cb0be49bb] Running
	I0528 21:54:54.982549   69886 system_pods.go:89] "kube-scheduler-no-preload-290122" [d191845d-d1ff-45d8-8601-b0395e2752f1] Running
	I0528 21:54:54.982554   69886 system_pods.go:89] "metrics-server-569cc877fc-j2khc" [2254e89c-3a61-4523-99a2-27ec92e73c9a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0528 21:54:54.982559   69886 system_pods.go:89] "storage-provisioner" [fc1a5463-05e0-4213-a7a8-2dd7f355ac36] Running
	I0528 21:54:54.982565   69886 system_pods.go:126] duration metric: took 4.757682ms to wait for k8s-apps to be running ...
	I0528 21:54:54.982571   69886 system_svc.go:44] waiting for kubelet service to be running ....
	I0528 21:54:54.982611   69886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 21:54:54.998318   69886 system_svc.go:56] duration metric: took 15.73926ms WaitForService to wait for kubelet
	I0528 21:54:54.998344   69886 kubeadm.go:576] duration metric: took 4m23.497907193s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 21:54:54.998364   69886 node_conditions.go:102] verifying NodePressure condition ...
	I0528 21:54:55.000709   69886 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 21:54:55.000726   69886 node_conditions.go:123] node cpu capacity is 2
	I0528 21:54:55.000737   69886 node_conditions.go:105] duration metric: took 2.368195ms to run NodePressure ...
	I0528 21:54:55.000747   69886 start.go:240] waiting for startup goroutines ...
	I0528 21:54:55.000754   69886 start.go:245] waiting for cluster config update ...
	I0528 21:54:55.000767   69886 start.go:254] writing updated cluster config ...
	I0528 21:54:55.001043   69886 ssh_runner.go:195] Run: rm -f paused
	I0528 21:54:55.049907   69886 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0528 21:54:55.051941   69886 out.go:177] * Done! kubectl is now configured to use "no-preload-290122" cluster and "default" namespace by default
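	The pod_ready.go lines above poll the metrics-server pod's Ready condition roughly every 2.5 seconds until a four-minute deadline expires. Below is a minimal client-go sketch of that style of readiness wait, not the minikube code itself; the kubeconfig path is an assumption, while the namespace, pod name, and timeout mirror the log.

```go
// readiness_wait.go - illustrative sketch of polling a pod's Ready condition
// with a deadline, in the spirit of the pod_ready.go entries above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute) // deadline, as in the log
	defer cancel()

	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "metrics-server-569cc877fc-j2khc", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("waitPodCondition: context deadline exceeded")
			return
		case <-time.After(2 * time.Second): // poll interval, roughly matching the cadence above
		}
	}
}
```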
	I0528 21:54:50.874003   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:53.946104   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:00.029992   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:03.098014   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:09.177976   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:12.250035   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:18.330105   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:21.402027   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:27.830110   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:55:27.830377   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:55:27.830409   70393 kubeadm.go:309] 
	I0528 21:55:27.830460   70393 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0528 21:55:27.830496   70393 kubeadm.go:309] 		timed out waiting for the condition
	I0528 21:55:27.830504   70393 kubeadm.go:309] 
	I0528 21:55:27.830563   70393 kubeadm.go:309] 	This error is likely caused by:
	I0528 21:55:27.830629   70393 kubeadm.go:309] 		- The kubelet is not running
	I0528 21:55:27.830806   70393 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0528 21:55:27.830833   70393 kubeadm.go:309] 
	I0528 21:55:27.830939   70393 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0528 21:55:27.830970   70393 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0528 21:55:27.830999   70393 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0528 21:55:27.831006   70393 kubeadm.go:309] 
	I0528 21:55:27.831089   70393 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0528 21:55:27.831161   70393 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0528 21:55:27.831168   70393 kubeadm.go:309] 
	I0528 21:55:27.831276   70393 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0528 21:55:27.831396   70393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0528 21:55:27.831491   70393 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0528 21:55:27.831586   70393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0528 21:55:27.831597   70393 kubeadm.go:309] 
	I0528 21:55:27.832385   70393 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0528 21:55:27.832478   70393 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0528 21:55:27.832569   70393 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0528 21:55:27.832707   70393 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0528 21:55:27.832768   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
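	The [kubelet-check] messages in the failure above all boil down to one probe: an HTTP GET against the kubelet's healthz endpoint at localhost:10248, which keeps returning "connection refused" because the kubelet never came up. A small Go sketch of that probe follows; the endpoint is quoted from the log, and the client timeout is an assumption.

```go
// healthz_probe.go - sketch of the kubelet health check that kubeadm's
// [kubelet-check] lines describe: GET http://localhost:10248/healthz.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second} // timeout is an assumption
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		// This is the failure mode seen in the log: connection refused
		// because the kubelet is not listening on 10248.
		fmt.Println("kubelet healthz probe failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("kubelet healthz: %s %s\n", resp.Status, string(body))
}
```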
	I0528 21:55:28.286592   70393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 21:55:28.301095   70393 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0528 21:55:28.310856   70393 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0528 21:55:28.310875   70393 kubeadm.go:156] found existing configuration files:
	
	I0528 21:55:28.310916   70393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0528 21:55:28.319713   70393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0528 21:55:28.319757   70393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0528 21:55:28.328964   70393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0528 21:55:28.337404   70393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0528 21:55:28.337456   70393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0528 21:55:28.346480   70393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0528 21:55:28.355427   70393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0528 21:55:28.355475   70393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0528 21:55:28.364843   70393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0528 21:55:28.373821   70393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0528 21:55:28.373874   70393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
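	The kubeadm.go:162 entries above implement a stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it references https://control-plane.minikube.internal:8443, and is removed otherwise before kubeadm init is retried. The sketch below is a local approximation of that check in Go; minikube performs the equivalent grep/rm over SSH inside the guest.

```go
// stale_config_check.go - local approximation of the stale-config cleanup shown
// above: keep a kubeconfig only if it references the expected control-plane
// endpoint, otherwise remove it so kubeadm init regenerates it.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or stale endpoint: remove it, as in the log above.
			fmt.Printf("removing stale config %s\n", f)
			_ = os.Remove(f)
			continue
		}
		fmt.Printf("keeping %s (references %s)\n", f, endpoint)
	}
}
```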
	I0528 21:55:28.382542   70393 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0528 21:55:28.448539   70393 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0528 21:55:28.448744   70393 kubeadm.go:309] [preflight] Running pre-flight checks
	I0528 21:55:28.592911   70393 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0528 21:55:28.593029   70393 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0528 21:55:28.593137   70393 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0528 21:55:28.793805   70393 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0528 21:55:28.795709   70393 out.go:204]   - Generating certificates and keys ...
	I0528 21:55:28.795786   70393 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0528 21:55:28.795854   70393 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0528 21:55:28.795959   70393 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0528 21:55:28.796055   70393 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0528 21:55:28.796153   70393 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0528 21:55:28.796349   70393 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0528 21:55:28.796467   70393 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0528 21:55:28.796537   70393 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0528 21:55:28.796610   70393 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0528 21:55:28.796721   70393 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0528 21:55:28.796768   70393 kubeadm.go:309] [certs] Using the existing "sa" key
	I0528 21:55:28.796847   70393 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0528 21:55:28.946885   70393 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0528 21:55:29.128640   70393 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0528 21:55:29.240490   70393 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0528 21:55:29.542128   70393 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0528 21:55:29.563784   70393 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0528 21:55:29.565927   70393 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0528 21:55:29.566159   70393 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0528 21:55:29.711517   70393 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0528 21:55:27.482003   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:30.554006   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:29.713311   70393 out.go:204]   - Booting up control plane ...
	I0528 21:55:29.713420   70393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0528 21:55:29.717970   70393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0528 21:55:29.718779   70393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0528 21:55:29.719429   70393 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0528 21:55:29.722781   70393 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0528 21:55:36.633958   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:39.710041   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:45.785968   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:48.861975   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:54.938007   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:58.014038   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:04.094039   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:07.162043   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:09.724902   70393 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0528 21:56:09.725334   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:56:09.725557   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:56:13.241997   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:14.726408   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:56:14.726667   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:56:16.314032   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:22.394150   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:25.465982   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:24.727314   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:56:24.727592   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:56:31.546004   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:34.617980   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:40.697993   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:43.770044   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:44.728635   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:56:44.728954   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:56:49.853977   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:52.922083   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:59.001998   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:02.073983   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:08.157974   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:11.226001   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:17.305964   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:20.377963   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:24.729385   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:57:24.729659   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:57:24.729688   70393 kubeadm.go:309] 
	I0528 21:57:24.729745   70393 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0528 21:57:24.729835   70393 kubeadm.go:309] 		timed out waiting for the condition
	I0528 21:57:24.729856   70393 kubeadm.go:309] 
	I0528 21:57:24.729898   70393 kubeadm.go:309] 	This error is likely caused by:
	I0528 21:57:24.729930   70393 kubeadm.go:309] 		- The kubelet is not running
	I0528 21:57:24.730023   70393 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0528 21:57:24.730030   70393 kubeadm.go:309] 
	I0528 21:57:24.730156   70393 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0528 21:57:24.730212   70393 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0528 21:57:24.730267   70393 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0528 21:57:24.730278   70393 kubeadm.go:309] 
	I0528 21:57:24.730403   70393 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0528 21:57:24.730522   70393 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0528 21:57:24.730533   70393 kubeadm.go:309] 
	I0528 21:57:24.730669   70393 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0528 21:57:24.730788   70393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0528 21:57:24.730899   70393 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0528 21:57:24.731020   70393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0528 21:57:24.731039   70393 kubeadm.go:309] 
	I0528 21:57:24.731657   70393 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0528 21:57:24.731752   70393 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0528 21:57:24.731861   70393 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
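The kubelet-check lines above are kubeadm polling the kubelet's local health endpoint on port 10248 and then falling back to its generic troubleshooting advice. As a rough manual equivalent of those probes, one could run the same commands the output names from inside the node (for example via 'minikube ssh'); this is a generic sketch, not part of the test harness:

  # Reproduce the kubelet-check probe and the suggested follow-ups by hand.
  curl -sSL http://localhost:10248/healthz || echo "kubelet /healthz not answering"
  sudo systemctl status kubelet --no-pager
  sudo journalctl -xeu kubelet --no-pager | tail -n 50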
	I0528 21:57:24.731942   70393 kubeadm.go:393] duration metric: took 7m57.905523124s to StartCluster
	I0528 21:57:24.731997   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:57:24.732064   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:57:24.772889   70393 cri.go:89] found id: ""
	I0528 21:57:24.772916   70393 logs.go:276] 0 containers: []
	W0528 21:57:24.772923   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:57:24.772929   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:57:24.772988   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:57:24.806418   70393 cri.go:89] found id: ""
	I0528 21:57:24.806447   70393 logs.go:276] 0 containers: []
	W0528 21:57:24.806458   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:57:24.806467   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:57:24.806534   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:57:24.844994   70393 cri.go:89] found id: ""
	I0528 21:57:24.845020   70393 logs.go:276] 0 containers: []
	W0528 21:57:24.845028   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:57:24.845035   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:57:24.845098   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:57:24.880517   70393 cri.go:89] found id: ""
	I0528 21:57:24.880547   70393 logs.go:276] 0 containers: []
	W0528 21:57:24.880558   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:57:24.880566   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:57:24.880615   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:57:24.917534   70393 cri.go:89] found id: ""
	I0528 21:57:24.917561   70393 logs.go:276] 0 containers: []
	W0528 21:57:24.917569   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:57:24.917575   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:57:24.917624   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:57:24.952898   70393 cri.go:89] found id: ""
	I0528 21:57:24.952929   70393 logs.go:276] 0 containers: []
	W0528 21:57:24.952940   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:57:24.952948   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:57:24.953011   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:57:24.994957   70393 cri.go:89] found id: ""
	I0528 21:57:24.994983   70393 logs.go:276] 0 containers: []
	W0528 21:57:24.994990   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:57:24.994996   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:57:24.995046   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:57:25.032594   70393 cri.go:89] found id: ""
	I0528 21:57:25.032617   70393 logs.go:276] 0 containers: []
	W0528 21:57:25.032624   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:57:25.032633   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:57:25.032645   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:57:25.112858   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:57:25.112882   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:57:25.112894   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:57:25.217748   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:57:25.217792   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:57:25.289998   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:57:25.290035   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:57:25.344833   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:57:25.344868   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0528 21:57:25.360547   70393 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0528 21:57:25.360594   70393 out.go:239] * 
	W0528 21:57:25.360659   70393 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0528 21:57:25.360693   70393 out.go:239] * 
	W0528 21:57:25.361545   70393 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0528 21:57:25.365387   70393 out.go:177] 
	W0528 21:57:25.366681   70393 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0528 21:57:25.366731   70393 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0528 21:57:25.366772   70393 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0528 21:57:25.369011   70393 out.go:177] 
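The run ends with minikube's K8S_KUBELET_NOT_RUNNING exit and the suggestion to inspect the kubelet journal and retry with the systemd cgroup driver. A minimal sketch of that follow-up, using only commands the output itself recommends; <profile> and CONTAINERID are placeholders, since the profile name and container IDs are not shown in this excerpt:

  # Inspect the control-plane containers as the kubeadm output suggests ...
  sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
  sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
  # ... then retry the start with the suggested kubelet cgroup driver override.
  minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd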
	I0528 21:57:26.462093   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:29.530040   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:35.610027   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:38.682076   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:44.762057   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:47.838109   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:53.914000   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:56.986078   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:58:03.066042   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:58:06.138002   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:58:12.218031   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:58:15.290043   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:58:18.290952   73188 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 21:58:18.291006   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetMachineName
	I0528 21:58:18.291338   73188 buildroot.go:166] provisioning hostname "default-k8s-diff-port-249165"
	I0528 21:58:18.291363   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetMachineName
	I0528 21:58:18.291646   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 21:58:18.293181   73188 machine.go:97] duration metric: took 4m37.423637232s to provisionDockerMachine
	I0528 21:58:18.293224   73188 fix.go:56] duration metric: took 4m37.444947597s for fixHost
	I0528 21:58:18.293230   73188 start.go:83] releasing machines lock for "default-k8s-diff-port-249165", held for 4m37.444964638s
	W0528 21:58:18.293245   73188 start.go:713] error starting host: provision: host is not running
	W0528 21:58:18.293337   73188 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0528 21:58:18.293346   73188 start.go:728] Will try again in 5 seconds ...
	I0528 21:58:23.295554   73188 start.go:360] acquireMachinesLock for default-k8s-diff-port-249165: {Name:mk6fb3103b370c7f3d9923fd9de7afe2c77239d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0528 21:58:23.295664   73188 start.go:364] duration metric: took 68.737µs to acquireMachinesLock for "default-k8s-diff-port-249165"
	I0528 21:58:23.295686   73188 start.go:96] Skipping create...Using existing machine configuration
	I0528 21:58:23.295692   73188 fix.go:54] fixHost starting: 
	I0528 21:58:23.296036   73188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:58:23.296059   73188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:58:23.310971   73188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33471
	I0528 21:58:23.311354   73188 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:58:23.311769   73188 main.go:141] libmachine: Using API Version  1
	I0528 21:58:23.311791   73188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:58:23.312072   73188 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:58:23.312279   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 21:58:23.312406   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetState
	I0528 21:58:23.313815   73188 fix.go:112] recreateIfNeeded on default-k8s-diff-port-249165: state=Stopped err=<nil>
	I0528 21:58:23.313837   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	W0528 21:58:23.313981   73188 fix.go:138] unexpected machine state, will restart: <nil>
	I0528 21:58:23.315867   73188 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-249165" ...
	I0528 21:58:23.317068   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .Start
	I0528 21:58:23.317224   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Ensuring networks are active...
	I0528 21:58:23.317939   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Ensuring network default is active
	I0528 21:58:23.318317   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Ensuring network mk-default-k8s-diff-port-249165 is active
	I0528 21:58:23.318787   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Getting domain xml...
	I0528 21:58:23.319512   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Creating domain...
	I0528 21:58:24.556897   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting to get IP...
	I0528 21:58:24.557688   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:24.558217   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:24.558288   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:24.558188   74350 retry.go:31] will retry after 274.96624ms: waiting for machine to come up
	I0528 21:58:24.834950   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:24.835591   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:24.835621   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:24.835547   74350 retry.go:31] will retry after 271.693151ms: waiting for machine to come up
	I0528 21:58:25.109193   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:25.109736   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:25.109782   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:25.109675   74350 retry.go:31] will retry after 381.434148ms: waiting for machine to come up
	I0528 21:58:25.493383   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:25.493853   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:25.493880   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:25.493784   74350 retry.go:31] will retry after 384.034489ms: waiting for machine to come up
	I0528 21:58:25.879289   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:25.879822   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:25.879854   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:25.879749   74350 retry.go:31] will retry after 517.483073ms: waiting for machine to come up
	I0528 21:58:26.398450   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:26.399012   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:26.399089   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:26.399010   74350 retry.go:31] will retry after 757.371702ms: waiting for machine to come up
	I0528 21:58:27.157490   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:27.158014   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:27.158044   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:27.157971   74350 retry.go:31] will retry after 1.042611523s: waiting for machine to come up
	I0528 21:58:28.201704   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:28.202196   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:28.202229   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:28.202140   74350 retry.go:31] will retry after 1.287212665s: waiting for machine to come up
	I0528 21:58:29.490908   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:29.491356   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:29.491386   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:29.491287   74350 retry.go:31] will retry after 1.576442022s: waiting for machine to come up
	I0528 21:58:31.069493   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:31.069966   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:31.069995   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:31.069917   74350 retry.go:31] will retry after 2.245383669s: waiting for machine to come up
	I0528 21:58:33.317217   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:33.317670   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:33.317701   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:33.317608   74350 retry.go:31] will retry after 2.415705908s: waiting for machine to come up
	I0528 21:58:35.736148   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:35.736526   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:35.736549   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:35.736486   74350 retry.go:31] will retry after 3.463330934s: waiting for machine to come up
	I0528 21:58:39.201369   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:39.201852   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:39.201885   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:39.201819   74350 retry.go:31] will retry after 4.496481714s: waiting for machine to come up
	I0528 21:58:43.699313   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:43.699760   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Found IP for machine: 192.168.72.48
	I0528 21:58:43.699783   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Reserving static IP address...
	I0528 21:58:43.699801   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has current primary IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:43.700262   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Reserved static IP address: 192.168.72.48
	I0528 21:58:43.700280   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for SSH to be available...
	I0528 21:58:43.700295   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-249165", mac: "52:54:00:f4:fc:a4", ip: "192.168.72.48"} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:43.700339   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | skip adding static IP to network mk-default-k8s-diff-port-249165 - found existing host DHCP lease matching {name: "default-k8s-diff-port-249165", mac: "52:54:00:f4:fc:a4", ip: "192.168.72.48"}
	I0528 21:58:43.700362   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | Getting to WaitForSSH function...
	I0528 21:58:43.702496   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:43.702910   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:43.702941   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:43.703104   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | Using SSH client type: external
	I0528 21:58:43.703126   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | Using SSH private key: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/default-k8s-diff-port-249165/id_rsa (-rw-------)
	I0528 21:58:43.703169   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.48 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18966-3963/.minikube/machines/default-k8s-diff-port-249165/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0528 21:58:43.703185   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | About to run SSH command:
	I0528 21:58:43.703211   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | exit 0
	I0528 21:58:43.825921   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | SSH cmd err, output: <nil>: 
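The long run of "no route to host" dials and the WaitForSSH probe above amount to repeatedly trying 'exit 0' over SSH until the restarted VM answers on port 22. A hand-run equivalent, reusing the address, key path and the relevant SSH options recorded in the log (adjust these for any other profile):

  # Poll the node's SSH service until it accepts a trivial command.
  until ssh -o ConnectTimeout=10 -o StrictHostKeyChecking=no \
        -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes \
        -i /home/jenkins/minikube-integration/18966-3963/.minikube/machines/default-k8s-diff-port-249165/id_rsa \
        docker@192.168.72.48 'exit 0'; do
      sleep 5
  done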
	I0528 21:58:43.826314   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetConfigRaw
	I0528 21:58:43.826989   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetIP
	I0528 21:58:43.829337   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:43.829663   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:43.829685   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:43.829993   73188 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/default-k8s-diff-port-249165/config.json ...
	I0528 21:58:43.830227   73188 machine.go:94] provisionDockerMachine start ...
	I0528 21:58:43.830259   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 21:58:43.830499   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 21:58:43.832840   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:43.833193   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:43.833222   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:43.833382   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHPort
	I0528 21:58:43.833551   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:43.833687   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:43.833820   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHUsername
	I0528 21:58:43.833977   73188 main.go:141] libmachine: Using SSH client type: native
	I0528 21:58:43.834147   73188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.48 22 <nil> <nil>}
	I0528 21:58:43.834156   73188 main.go:141] libmachine: About to run SSH command:
	hostname
	I0528 21:58:43.938159   73188 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0528 21:58:43.938191   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetMachineName
	I0528 21:58:43.938426   73188 buildroot.go:166] provisioning hostname "default-k8s-diff-port-249165"
	I0528 21:58:43.938472   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetMachineName
	I0528 21:58:43.938684   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 21:58:43.941594   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:43.941986   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:43.942016   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:43.942195   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHPort
	I0528 21:58:43.942393   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:43.942550   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:43.942742   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHUsername
	I0528 21:58:43.942913   73188 main.go:141] libmachine: Using SSH client type: native
	I0528 21:58:43.943069   73188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.48 22 <nil> <nil>}
	I0528 21:58:43.943082   73188 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-249165 && echo "default-k8s-diff-port-249165" | sudo tee /etc/hostname
	I0528 21:58:44.060923   73188 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-249165
	
	I0528 21:58:44.060955   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 21:58:44.063621   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:44.063974   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:44.064008   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:44.064132   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHPort
	I0528 21:58:44.064326   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:44.064508   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:44.064660   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHUsername
	I0528 21:58:44.064818   73188 main.go:141] libmachine: Using SSH client type: native
	I0528 21:58:44.064999   73188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.48 22 <nil> <nil>}
	I0528 21:58:44.065016   73188 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-249165' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-249165/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-249165' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 21:58:44.174464   73188 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 21:58:44.174491   73188 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18966-3963/.minikube CaCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18966-3963/.minikube}
	I0528 21:58:44.174524   73188 buildroot.go:174] setting up certificates
	I0528 21:58:44.174538   73188 provision.go:84] configureAuth start
	I0528 21:58:44.174549   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetMachineName
	I0528 21:58:44.174838   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetIP
	I0528 21:58:44.177623   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:44.178024   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:44.178052   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:44.178250   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 21:58:44.180956   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:44.181305   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:44.181334   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:44.181500   73188 provision.go:143] copyHostCerts
	I0528 21:58:44.181571   73188 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem, removing ...
	I0528 21:58:44.181582   73188 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem
	I0528 21:58:44.181643   73188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem (1078 bytes)
	I0528 21:58:44.181753   73188 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem, removing ...
	I0528 21:58:44.181787   73188 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem
	I0528 21:58:44.181819   73188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem (1123 bytes)
	I0528 21:58:44.181892   73188 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem, removing ...
	I0528 21:58:44.181899   73188 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem
	I0528 21:58:44.181920   73188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem (1675 bytes)
	I0528 21:58:44.181984   73188 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-249165 san=[127.0.0.1 192.168.72.48 default-k8s-diff-port-249165 localhost minikube]
	I0528 21:58:44.490074   73188 provision.go:177] copyRemoteCerts
	I0528 21:58:44.490127   73188 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 21:58:44.490150   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 21:58:44.492735   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:44.493121   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:44.493156   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:44.493306   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHPort
	I0528 21:58:44.493526   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:44.493690   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHUsername
	I0528 21:58:44.493845   73188 sshutil.go:53] new ssh client: &{IP:192.168.72.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/default-k8s-diff-port-249165/id_rsa Username:docker}
	I0528 21:58:44.575620   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0528 21:58:44.601185   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0528 21:58:44.625266   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0528 21:58:44.648243   73188 provision.go:87] duration metric: took 473.69068ms to configureAuth
	I0528 21:58:44.648271   73188 buildroot.go:189] setting minikube options for container-runtime
	I0528 21:58:44.648430   73188 config.go:182] Loaded profile config "default-k8s-diff-port-249165": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 21:58:44.648502   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 21:58:44.651430   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:44.651793   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:44.651820   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:44.651960   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHPort
	I0528 21:58:44.652140   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:44.652277   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:44.652436   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHUsername
	I0528 21:58:44.652592   73188 main.go:141] libmachine: Using SSH client type: native
	I0528 21:58:44.652762   73188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.48 22 <nil> <nil>}
	I0528 21:58:44.652777   73188 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0528 21:58:44.923577   73188 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0528 21:58:44.923597   73188 machine.go:97] duration metric: took 1.093358522s to provisionDockerMachine
	I0528 21:58:44.923607   73188 start.go:293] postStartSetup for "default-k8s-diff-port-249165" (driver="kvm2")
	I0528 21:58:44.923618   73188 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0528 21:58:44.923649   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 21:58:44.924030   73188 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0528 21:58:44.924124   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 21:58:44.926704   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:44.927009   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:44.927038   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:44.927162   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHPort
	I0528 21:58:44.927347   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:44.927491   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHUsername
	I0528 21:58:44.927627   73188 sshutil.go:53] new ssh client: &{IP:192.168.72.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/default-k8s-diff-port-249165/id_rsa Username:docker}
	I0528 21:58:45.009429   73188 ssh_runner.go:195] Run: cat /etc/os-release
	I0528 21:58:45.014007   73188 info.go:137] Remote host: Buildroot 2023.02.9
	I0528 21:58:45.014032   73188 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/addons for local assets ...
	I0528 21:58:45.014094   73188 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/files for local assets ...
	I0528 21:58:45.014161   73188 filesync.go:149] local asset: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem -> 117602.pem in /etc/ssl/certs
	I0528 21:58:45.014265   73188 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0528 21:58:45.024039   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem --> /etc/ssl/certs/117602.pem (1708 bytes)
	I0528 21:58:45.050461   73188 start.go:296] duration metric: took 126.842658ms for postStartSetup
	I0528 21:58:45.050497   73188 fix.go:56] duration metric: took 21.754803931s for fixHost
	I0528 21:58:45.050519   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 21:58:45.053312   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:45.053639   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:45.053671   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:45.053821   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHPort
	I0528 21:58:45.054025   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:45.054198   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:45.054339   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHUsername
	I0528 21:58:45.054475   73188 main.go:141] libmachine: Using SSH client type: native
	I0528 21:58:45.054646   73188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.48 22 <nil> <nil>}
	I0528 21:58:45.054657   73188 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0528 21:58:45.159430   73188 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716933525.136417037
	
	I0528 21:58:45.159460   73188 fix.go:216] guest clock: 1716933525.136417037
	I0528 21:58:45.159470   73188 fix.go:229] Guest: 2024-05-28 21:58:45.136417037 +0000 UTC Remote: 2024-05-28 21:58:45.05050169 +0000 UTC m=+304.341994853 (delta=85.915347ms)
	I0528 21:58:45.159495   73188 fix.go:200] guest clock delta is within tolerance: 85.915347ms
	I0528 21:58:45.159502   73188 start.go:83] releasing machines lock for "default-k8s-diff-port-249165", held for 21.863825672s
	I0528 21:58:45.159552   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 21:58:45.159830   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetIP
	I0528 21:58:45.162709   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:45.163053   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:45.163089   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:45.163264   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 21:58:45.163717   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 21:58:45.163931   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 21:58:45.164028   73188 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0528 21:58:45.164072   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 21:58:45.164139   73188 ssh_runner.go:195] Run: cat /version.json
	I0528 21:58:45.164164   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 21:58:45.167063   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:45.167215   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:45.167477   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:45.167505   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:45.167534   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:45.167551   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:45.167605   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHPort
	I0528 21:58:45.167811   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHPort
	I0528 21:58:45.167826   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:45.167992   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHUsername
	I0528 21:58:45.167998   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:45.168132   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHUsername
	I0528 21:58:45.168152   73188 sshutil.go:53] new ssh client: &{IP:192.168.72.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/default-k8s-diff-port-249165/id_rsa Username:docker}
	I0528 21:58:45.168279   73188 sshutil.go:53] new ssh client: &{IP:192.168.72.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/default-k8s-diff-port-249165/id_rsa Username:docker}
	I0528 21:58:45.243473   73188 ssh_runner.go:195] Run: systemctl --version
	I0528 21:58:45.275272   73188 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0528 21:58:45.416616   73188 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0528 21:58:45.423144   73188 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0528 21:58:45.423203   73188 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 21:58:45.438939   73188 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0528 21:58:45.438963   73188 start.go:494] detecting cgroup driver to use...
	I0528 21:58:45.439035   73188 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0528 21:58:45.454944   73188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 21:58:45.469976   73188 docker.go:217] disabling cri-docker service (if available) ...
	I0528 21:58:45.470031   73188 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0528 21:58:45.484152   73188 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0528 21:58:45.497541   73188 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0528 21:58:45.622055   73188 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0528 21:58:45.760388   73188 docker.go:233] disabling docker service ...
	I0528 21:58:45.760472   73188 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0528 21:58:45.779947   73188 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0528 21:58:45.794310   73188 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0528 21:58:45.926921   73188 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0528 21:58:46.042042   73188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0528 21:58:46.055486   73188 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 21:58:46.074285   73188 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0528 21:58:46.074347   73188 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:58:46.084646   73188 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0528 21:58:46.084709   73188 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:58:46.094701   73188 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:58:46.104877   73188 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:58:46.115549   73188 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0528 21:58:46.125973   73188 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:58:46.136293   73188 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:58:46.153570   73188 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:58:46.165428   73188 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0528 21:58:46.175167   73188 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0528 21:58:46.175224   73188 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0528 21:58:46.189687   73188 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0528 21:58:46.199630   73188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 21:58:46.322596   73188 ssh_runner.go:195] Run: sudo systemctl restart crio
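The run of sed/grep edits just above rewrites /etc/crio/crio.conf.d/02-crio.conf in place before CRI-O is restarted. A quick way to spot-check the result on the guest is to grep for the keys those commands touch; the expected values below are reconstructed from the sed expressions in the log, not captured from the machine:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # expected, per the edits above:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",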
	I0528 21:58:46.465841   73188 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0528 21:58:46.465905   73188 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0528 21:58:46.471249   73188 start.go:562] Will wait 60s for crictl version
	I0528 21:58:46.471301   73188 ssh_runner.go:195] Run: which crictl
	I0528 21:58:46.474963   73188 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0528 21:58:46.514028   73188 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0528 21:58:46.514111   73188 ssh_runner.go:195] Run: crio --version
	I0528 21:58:46.544060   73188 ssh_runner.go:195] Run: crio --version
	I0528 21:58:46.577448   73188 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0528 21:58:46.578815   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetIP
	I0528 21:58:46.581500   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:46.581876   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:46.581918   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:46.582081   73188 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0528 21:58:46.586277   73188 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 21:58:46.599163   73188 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-249165 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.1 ClusterName:default-k8s-diff-port-249165 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.48 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0528 21:58:46.599265   73188 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 21:58:46.599308   73188 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 21:58:46.636824   73188 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0528 21:58:46.636895   73188 ssh_runner.go:195] Run: which lz4
	I0528 21:58:46.640890   73188 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0528 21:58:46.645433   73188 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0528 21:58:46.645457   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0528 21:58:48.069572   73188 crio.go:462] duration metric: took 1.428706508s to copy over tarball
	I0528 21:58:48.069660   73188 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0528 21:58:50.289428   73188 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.2197347s)
	I0528 21:58:50.289459   73188 crio.go:469] duration metric: took 2.219854472s to extract the tarball
	I0528 21:58:50.289466   73188 ssh_runner.go:146] rm: /preloaded.tar.lz4
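The preload tarball copied and unpacked above carries the container image store for this Kubernetes/runtime pair. If you ever need to inspect such a tarball without extracting it onto /var, listing its contents is enough (a sketch; the filename is the one scp'd above from the host cache):

    lz4 -dc preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 | tar -t | head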
	I0528 21:58:50.329649   73188 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 21:58:50.373900   73188 crio.go:514] all images are preloaded for cri-o runtime.
	I0528 21:58:50.373922   73188 cache_images.go:84] Images are preloaded, skipping loading
	I0528 21:58:50.373928   73188 kubeadm.go:928] updating node { 192.168.72.48 8444 v1.30.1 crio true true} ...
	I0528 21:58:50.374059   73188 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-249165 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.48
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-249165 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0528 21:58:50.374142   73188 ssh_runner.go:195] Run: crio config
	I0528 21:58:50.430538   73188 cni.go:84] Creating CNI manager for ""
	I0528 21:58:50.430573   73188 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 21:58:50.430590   73188 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0528 21:58:50.430618   73188 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.48 APIServerPort:8444 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-249165 NodeName:default-k8s-diff-port-249165 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0528 21:58:50.430754   73188 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.48
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-249165"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0528 21:58:50.430822   73188 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0528 21:58:50.440906   73188 binaries.go:44] Found k8s binaries, skipping transfer
	I0528 21:58:50.440961   73188 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0528 21:58:50.450354   73188 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0528 21:58:50.467008   73188 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0528 21:58:50.483452   73188 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
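The file staged above as /var/tmp/minikube/kubeadm.yaml.new is the config printed at kubeadm.go:187 earlier; the later "kubeadm init phase ..." calls consume it once it is moved into place. To sanity-check such a file by hand, kubeadm can dry-run against it (a sketch, assuming the same staged kubeadm binary; recent releases also accept "kubeadm config validate --config <file>"):

    sudo /var/lib/minikube/binaries/v1.30.1/kubeadm init --dry-run --config /var/tmp/minikube/kubeadm.yaml.new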
	I0528 21:58:50.500551   73188 ssh_runner.go:195] Run: grep 192.168.72.48	control-plane.minikube.internal$ /etc/hosts
	I0528 21:58:50.504597   73188 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.48	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 21:58:50.516659   73188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 21:58:50.634433   73188 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 21:58:50.651819   73188 certs.go:68] Setting up /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/default-k8s-diff-port-249165 for IP: 192.168.72.48
	I0528 21:58:50.651844   73188 certs.go:194] generating shared ca certs ...
	I0528 21:58:50.651868   73188 certs.go:226] acquiring lock for ca certs: {Name:mkf081a2e878fd451cfbdf01dc28a5f7a6a3abc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:58:50.652040   73188 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key
	I0528 21:58:50.652109   73188 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key
	I0528 21:58:50.652124   73188 certs.go:256] generating profile certs ...
	I0528 21:58:50.652223   73188 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/default-k8s-diff-port-249165/client.key
	I0528 21:58:50.652298   73188 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/default-k8s-diff-port-249165/apiserver.key.3e2f4fca
	I0528 21:58:50.652351   73188 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/default-k8s-diff-port-249165/proxy-client.key
	I0528 21:58:50.652505   73188 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem (1338 bytes)
	W0528 21:58:50.652546   73188 certs.go:480] ignoring /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760_empty.pem, impossibly tiny 0 bytes
	I0528 21:58:50.652558   73188 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem (1675 bytes)
	I0528 21:58:50.652589   73188 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem (1078 bytes)
	I0528 21:58:50.652617   73188 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem (1123 bytes)
	I0528 21:58:50.652645   73188 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem (1675 bytes)
	I0528 21:58:50.652687   73188 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem (1708 bytes)
	I0528 21:58:50.653356   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0528 21:58:50.687329   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0528 21:58:50.731844   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0528 21:58:50.758921   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0528 21:58:50.793162   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/default-k8s-diff-port-249165/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0528 21:58:50.820772   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/default-k8s-diff-port-249165/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0528 21:58:50.849830   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/default-k8s-diff-port-249165/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0528 21:58:50.875695   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/default-k8s-diff-port-249165/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0528 21:58:50.900876   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem --> /usr/share/ca-certificates/117602.pem (1708 bytes)
	I0528 21:58:50.925424   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0528 21:58:50.949453   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem --> /usr/share/ca-certificates/11760.pem (1338 bytes)
	I0528 21:58:50.973597   73188 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0528 21:58:50.990297   73188 ssh_runner.go:195] Run: openssl version
	I0528 21:58:50.996164   73188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11760.pem && ln -fs /usr/share/ca-certificates/11760.pem /etc/ssl/certs/11760.pem"
	I0528 21:58:51.007959   73188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11760.pem
	I0528 21:58:51.012987   73188 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 28 20:34 /usr/share/ca-certificates/11760.pem
	I0528 21:58:51.013062   73188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11760.pem
	I0528 21:58:51.019526   73188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11760.pem /etc/ssl/certs/51391683.0"
	I0528 21:58:51.031068   73188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/117602.pem && ln -fs /usr/share/ca-certificates/117602.pem /etc/ssl/certs/117602.pem"
	I0528 21:58:51.043064   73188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117602.pem
	I0528 21:58:51.048507   73188 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 28 20:34 /usr/share/ca-certificates/117602.pem
	I0528 21:58:51.048600   73188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117602.pem
	I0528 21:58:51.054818   73188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/117602.pem /etc/ssl/certs/3ec20f2e.0"
	I0528 21:58:51.065829   73188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0528 21:58:51.076414   73188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0528 21:58:51.081090   73188 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 28 20:22 /usr/share/ca-certificates/minikubeCA.pem
	I0528 21:58:51.081141   73188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0528 21:58:51.086736   73188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
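The test/ln pairs above lay the CA files out in OpenSSL's hashed-symlink scheme: each certificate in /etc/ssl/certs gets a link named <subject-hash>.0 so the verifier can locate it. The hash in each link name is what the preceding "openssl x509 -hash -noout" call printed; reproducing one by hand (a sketch, using the minikubeCA entry from the log):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0                                            # -> /etc/ssl/certs/minikubeCA.pem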
	I0528 21:58:51.096968   73188 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0528 21:58:51.101288   73188 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0528 21:58:51.107082   73188 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0528 21:58:51.112759   73188 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0528 21:58:51.118504   73188 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0528 21:58:51.124067   73188 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0528 21:58:51.129783   73188 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
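Each "-checkend 86400" probe above asks openssl whether the certificate will expire within the next 86400 seconds (24 hours); the command exits 0 if the cert is still valid beyond that window, non-zero otherwise. Reusing one of the paths from the log as a sketch:

    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for at least another 24h" \
      || echo "expires within 24h (or is unreadable)"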
	I0528 21:58:51.135390   73188 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-249165 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.1 ClusterName:default-k8s-diff-port-249165 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.48 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:58:51.135521   73188 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0528 21:58:51.135583   73188 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0528 21:58:51.173919   73188 cri.go:89] found id: ""
	I0528 21:58:51.173995   73188 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0528 21:58:51.184361   73188 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0528 21:58:51.184381   73188 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0528 21:58:51.184386   73188 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0528 21:58:51.184424   73188 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0528 21:58:51.194386   73188 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0528 21:58:51.195726   73188 kubeconfig.go:125] found "default-k8s-diff-port-249165" server: "https://192.168.72.48:8444"
	I0528 21:58:51.198799   73188 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0528 21:58:51.208118   73188 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.48
	I0528 21:58:51.208146   73188 kubeadm.go:1154] stopping kube-system containers ...
	I0528 21:58:51.208157   73188 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0528 21:58:51.208193   73188 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0528 21:58:51.252026   73188 cri.go:89] found id: ""
	I0528 21:58:51.252089   73188 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0528 21:58:51.269404   73188 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0528 21:58:51.279728   73188 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0528 21:58:51.279744   73188 kubeadm.go:156] found existing configuration files:
	
	I0528 21:58:51.279790   73188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0528 21:58:51.289352   73188 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0528 21:58:51.289396   73188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0528 21:58:51.299059   73188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0528 21:58:51.308375   73188 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0528 21:58:51.308425   73188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0528 21:58:51.317866   73188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0528 21:58:51.327433   73188 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0528 21:58:51.327488   73188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0528 21:58:51.337148   73188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0528 21:58:51.346358   73188 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0528 21:58:51.346410   73188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0528 21:58:51.355689   73188 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0528 21:58:51.365235   73188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 21:58:51.488772   73188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 21:58:52.553360   73188 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.064544437s)
	I0528 21:58:52.553398   73188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0528 21:58:52.780281   73188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 21:58:52.839188   73188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0528 21:58:52.914117   73188 api_server.go:52] waiting for apiserver process to appear ...
	I0528 21:58:52.914222   73188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:58:53.415170   73188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:58:53.914987   73188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:58:53.933842   73188 api_server.go:72] duration metric: took 1.019725255s to wait for apiserver process to appear ...
	I0528 21:58:53.933869   73188 api_server.go:88] waiting for apiserver healthz status ...
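The lines that follow poll the apiserver's /healthz endpoint until it comes up clean; the 403 responses are expected at first because the probe is anonymous, and the 500s list which post-start hooks are still pending. The same probe can be run by hand from the host while a start is in flight (a sketch; -k skips TLS verification and ?verbose produces the per-check [+]/[-] listing seen below):

    curl -k "https://192.168.72.48:8444/healthz?verbose"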
	I0528 21:58:53.933886   73188 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8444/healthz ...
	I0528 21:58:53.934358   73188 api_server.go:269] stopped: https://192.168.72.48:8444/healthz: Get "https://192.168.72.48:8444/healthz": dial tcp 192.168.72.48:8444: connect: connection refused
	I0528 21:58:54.434146   73188 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8444/healthz ...
	I0528 21:58:56.813345   73188 api_server.go:279] https://192.168.72.48:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0528 21:58:56.813384   73188 api_server.go:103] status: https://192.168.72.48:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0528 21:58:56.813396   73188 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8444/healthz ...
	I0528 21:58:56.821906   73188 api_server.go:279] https://192.168.72.48:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0528 21:58:56.821935   73188 api_server.go:103] status: https://192.168.72.48:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0528 21:58:56.934069   73188 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8444/healthz ...
	I0528 21:58:56.941002   73188 api_server.go:279] https://192.168.72.48:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 21:58:56.941034   73188 api_server.go:103] status: https://192.168.72.48:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 21:58:57.434777   73188 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8444/healthz ...
	I0528 21:58:57.439312   73188 api_server.go:279] https://192.168.72.48:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 21:58:57.439345   73188 api_server.go:103] status: https://192.168.72.48:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 21:58:57.934912   73188 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8444/healthz ...
	I0528 21:58:57.941171   73188 api_server.go:279] https://192.168.72.48:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 21:58:57.941201   73188 api_server.go:103] status: https://192.168.72.48:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 21:58:58.434198   73188 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8444/healthz ...
	I0528 21:58:58.438164   73188 api_server.go:279] https://192.168.72.48:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 21:58:58.438190   73188 api_server.go:103] status: https://192.168.72.48:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 21:58:58.934813   73188 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8444/healthz ...
	I0528 21:58:58.939873   73188 api_server.go:279] https://192.168.72.48:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 21:58:58.939899   73188 api_server.go:103] status: https://192.168.72.48:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 21:58:59.434373   73188 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8444/healthz ...
	I0528 21:58:59.438639   73188 api_server.go:279] https://192.168.72.48:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 21:58:59.438662   73188 api_server.go:103] status: https://192.168.72.48:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 21:58:59.934909   73188 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8444/healthz ...
	I0528 21:58:59.940297   73188 api_server.go:279] https://192.168.72.48:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 21:58:59.940331   73188 api_server.go:103] status: https://192.168.72.48:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 21:59:00.434920   73188 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8444/healthz ...
	I0528 21:59:00.440734   73188 api_server.go:279] https://192.168.72.48:8444/healthz returned 200:
	ok
	I0528 21:59:00.447107   73188 api_server.go:141] control plane version: v1.30.1
	I0528 21:59:00.447129   73188 api_server.go:131] duration metric: took 6.513254325s to wait for apiserver health ...
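Editor's note: the checks above show minikube polling the apiserver's /healthz endpoint roughly every 500ms until it returns 200. A minimal shell sketch of that polling pattern follows; the endpoint comes from the log, while the CA path and the 120s budget are illustrative assumptions, not values from this run.

    # Poll the apiserver healthz endpoint until it returns HTTP 200, as the log above does.
    # The CA path and 120s budget are assumptions for illustration only.
    HEALTHZ_URL="https://192.168.72.48:8444/healthz"
    CACERT="/var/lib/minikube/certs/ca.crt"   # assumed CA location on the node
    deadline=$((SECONDS + 120))
    until [ "$(curl -s --cacert "$CACERT" -o /dev/null -w '%{http_code}' "$HEALTHZ_URL")" = "200" ]; do
      [ "$SECONDS" -ge "$deadline" ] && { echo "apiserver not healthy in time" >&2; exit 1; }
      sleep 0.5
    done
    echo "apiserver healthy"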
	I0528 21:59:00.447137   73188 cni.go:84] Creating CNI manager for ""
	I0528 21:59:00.447143   73188 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 21:59:00.449008   73188 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0528 21:59:00.450184   73188 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0528 21:59:00.461520   73188 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
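Editor's note: here minikube writes a 496-byte bridge CNI config to /etc/cni/net.d/1-k8s.conflist. The exact file contents are not shown in the log; the sketch below only illustrates the general shape of a bridge-plugin conflist, and every field value in it is an assumption rather than minikube's generated file.

    # Illustrative bridge CNI conflist; values are assumptions, not the file minikube wrote.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF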
	I0528 21:59:00.480494   73188 system_pods.go:43] waiting for kube-system pods to appear ...
	I0528 21:59:00.491722   73188 system_pods.go:59] 8 kube-system pods found
	I0528 21:59:00.491755   73188 system_pods.go:61] "coredns-7db6d8ff4d-qk6tz" [d3250a5a-2eda-41d3-86e2-227e85da8cb6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0528 21:59:00.491764   73188 system_pods.go:61] "etcd-default-k8s-diff-port-249165" [e1179b11-47b9-4803-91bb-a8d8470aac40] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0528 21:59:00.491771   73188 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-249165" [7f6c0680-8827-4f15-90e5-f8d9e1d1bc8c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0528 21:59:00.491780   73188 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-249165" [4d6f8bb3-0f4b-41fa-9b02-3b2c79513bf5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0528 21:59:00.491786   73188 system_pods.go:61] "kube-proxy-fvmjv" [df55e25a-a79a-4293-9636-31f5ebc4fc77] Running
	I0528 21:59:00.491791   73188 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-249165" [82200561-6687-448d-b73f-d0e047dec773] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0528 21:59:00.491797   73188 system_pods.go:61] "metrics-server-569cc877fc-k2q4p" [d1ec23de-6293-42a8-80f3-e28e007b6a34] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0528 21:59:00.491802   73188 system_pods.go:61] "storage-provisioner" [1f84dc9c-6b4e-44c9-82a2-5dabcb0b2178] Running
	I0528 21:59:00.491808   73188 system_pods.go:74] duration metric: took 11.287283ms to wait for pod list to return data ...
	I0528 21:59:00.491817   73188 node_conditions.go:102] verifying NodePressure condition ...
	I0528 21:59:00.495098   73188 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 21:59:00.495124   73188 node_conditions.go:123] node cpu capacity is 2
	I0528 21:59:00.495135   73188 node_conditions.go:105] duration metric: took 3.313626ms to run NodePressure ...
	I0528 21:59:00.495151   73188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 21:59:00.782161   73188 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0528 21:59:00.786287   73188 kubeadm.go:733] kubelet initialised
	I0528 21:59:00.786308   73188 kubeadm.go:734] duration metric: took 4.112496ms waiting for restarted kubelet to initialise ...
	I0528 21:59:00.786316   73188 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 21:59:00.790951   73188 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-qk6tz" in "kube-system" namespace to be "Ready" ...
	I0528 21:59:00.795459   73188 pod_ready.go:97] node "default-k8s-diff-port-249165" hosting pod "coredns-7db6d8ff4d-qk6tz" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-249165" has status "Ready":"False"
	I0528 21:59:00.795486   73188 pod_ready.go:81] duration metric: took 4.510715ms for pod "coredns-7db6d8ff4d-qk6tz" in "kube-system" namespace to be "Ready" ...
	E0528 21:59:00.795496   73188 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-249165" hosting pod "coredns-7db6d8ff4d-qk6tz" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-249165" has status "Ready":"False"
	I0528 21:59:00.795505   73188 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-249165" in "kube-system" namespace to be "Ready" ...
	I0528 21:59:00.799372   73188 pod_ready.go:97] node "default-k8s-diff-port-249165" hosting pod "etcd-default-k8s-diff-port-249165" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-249165" has status "Ready":"False"
	I0528 21:59:00.799395   73188 pod_ready.go:81] duration metric: took 3.878119ms for pod "etcd-default-k8s-diff-port-249165" in "kube-system" namespace to be "Ready" ...
	E0528 21:59:00.799405   73188 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-249165" hosting pod "etcd-default-k8s-diff-port-249165" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-249165" has status "Ready":"False"
	I0528 21:59:00.799412   73188 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-249165" in "kube-system" namespace to be "Ready" ...
	I0528 21:59:00.803708   73188 pod_ready.go:97] node "default-k8s-diff-port-249165" hosting pod "kube-apiserver-default-k8s-diff-port-249165" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-249165" has status "Ready":"False"
	I0528 21:59:00.803732   73188 pod_ready.go:81] duration metric: took 4.312817ms for pod "kube-apiserver-default-k8s-diff-port-249165" in "kube-system" namespace to be "Ready" ...
	E0528 21:59:00.803744   73188 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-249165" hosting pod "kube-apiserver-default-k8s-diff-port-249165" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-249165" has status "Ready":"False"
	I0528 21:59:00.803752   73188 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-249165" in "kube-system" namespace to be "Ready" ...
	I0528 21:59:00.883526   73188 pod_ready.go:97] node "default-k8s-diff-port-249165" hosting pod "kube-controller-manager-default-k8s-diff-port-249165" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-249165" has status "Ready":"False"
	I0528 21:59:00.883552   73188 pod_ready.go:81] duration metric: took 79.787719ms for pod "kube-controller-manager-default-k8s-diff-port-249165" in "kube-system" namespace to be "Ready" ...
	E0528 21:59:00.883562   73188 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-249165" hosting pod "kube-controller-manager-default-k8s-diff-port-249165" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-249165" has status "Ready":"False"
	I0528 21:59:00.883569   73188 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fvmjv" in "kube-system" namespace to be "Ready" ...
	I0528 21:59:01.284553   73188 pod_ready.go:92] pod "kube-proxy-fvmjv" in "kube-system" namespace has status "Ready":"True"
	I0528 21:59:01.284580   73188 pod_ready.go:81] duration metric: took 401.003384ms for pod "kube-proxy-fvmjv" in "kube-system" namespace to be "Ready" ...
	I0528 21:59:01.284590   73188 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-249165" in "kube-system" namespace to be "Ready" ...
	I0528 21:59:03.293222   73188 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-249165" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:04.291145   73188 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-249165" in "kube-system" namespace has status "Ready":"True"
	I0528 21:59:04.291171   73188 pod_ready.go:81] duration metric: took 3.006571778s for pod "kube-scheduler-default-k8s-diff-port-249165" in "kube-system" namespace to be "Ready" ...
	I0528 21:59:04.291183   73188 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace to be "Ready" ...
	I0528 21:59:06.297256   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:08.299092   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:10.797261   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:12.797546   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:15.297532   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:17.297769   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:19.298152   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:21.797794   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:24.298073   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:26.797503   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:29.297699   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:31.298091   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:33.799278   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:36.298358   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:38.298659   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:40.797501   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:43.297098   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:45.297322   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:47.798004   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:49.798749   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:52.296950   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:54.297779   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:56.297921   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:58.797953   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:01.297566   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:03.302555   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:05.797610   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:07.797893   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:09.798237   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:12.297953   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:14.298232   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:16.798660   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:19.296867   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:21.297325   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:23.797687   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:26.298657   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:28.798073   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:31.299219   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:33.800018   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:36.297914   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:38.297984   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:40.796919   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:42.798156   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:44.800231   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:47.297425   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:49.800316   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:52.297415   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:54.297549   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:56.798787   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:59.297851   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:01.298008   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:03.298732   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:05.797817   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:07.797913   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:10.297286   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:12.797866   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:14.799144   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:17.297592   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:19.298065   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:21.797973   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:23.798794   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:26.298087   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:28.300587   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:30.797976   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:33.297574   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:35.298403   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:37.797436   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:40.300414   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:42.797172   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:45.297340   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:47.297684   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:49.298815   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:51.299597   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:53.798447   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:56.297483   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:58.298264   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:00.798507   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:03.297276   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:05.299518   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:07.799770   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:10.300402   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:12.796971   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:14.798057   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:16.798315   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:18.800481   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:21.298816   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:23.797133   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:25.798165   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:28.297030   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:30.797031   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:32.797960   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:34.798334   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:37.298013   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:39.797122   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:42.297054   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:44.297976   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:46.797135   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:48.797338   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:50.797608   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:53.299621   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:55.797973   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:57.798174   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:03:00.298537   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:03:02.796804   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:03:04.291841   73188 pod_ready.go:81] duration metric: took 4m0.000641837s for pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace to be "Ready" ...
	E0528 22:03:04.291876   73188 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace to be "Ready" (will not retry!)
	I0528 22:03:04.291893   73188 pod_ready.go:38] duration metric: took 4m3.505569148s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 22:03:04.291917   73188 kubeadm.go:591] duration metric: took 4m13.107527237s to restartPrimaryControlPlane
	W0528 22:03:04.291969   73188 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
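Editor's note: the block above ends with minikube giving up after waiting 4m0s for metrics-server-569cc877fc-k2q4p to reach the Ready condition and deciding to reset the cluster. A sketch of an equivalent readiness wait from a shell, reusing the kubectl binary and kubeconfig paths that appear later in this log (the 4m0s timeout mirrors the log; treat the invocation as illustrative, not as what minikube itself runs):

    # Wait up to 4 minutes for the metrics-server pod to become Ready, mirroring the wait above.
    sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system wait pod metrics-server-569cc877fc-k2q4p \
      --for=condition=Ready --timeout=4m0s \
      || echo "metrics-server never became Ready (matches the timeout in the log)"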
	I0528 22:03:04.291999   73188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0528 22:03:35.997887   73188 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.705862339s)
	I0528 22:03:35.997980   73188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 22:03:36.013927   73188 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0528 22:03:36.023856   73188 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0528 22:03:36.033329   73188 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0528 22:03:36.033349   73188 kubeadm.go:156] found existing configuration files:
	
	I0528 22:03:36.033385   73188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0528 22:03:36.042504   73188 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0528 22:03:36.042555   73188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0528 22:03:36.051990   73188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0528 22:03:36.061602   73188 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0528 22:03:36.061672   73188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0528 22:03:36.071582   73188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0528 22:03:36.081217   73188 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0528 22:03:36.081289   73188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0528 22:03:36.091380   73188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0528 22:03:36.101427   73188 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0528 22:03:36.101491   73188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
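Editor's note: the lines above show the stale-config check minikube performs before re-running kubeadm init: for each kubeconfig under /etc/kubernetes it greps for the expected control-plane endpoint and removes the file when the endpoint is absent (or, as here, when the file no longer exists). A condensed sketch of that loop, using the same endpoint and paths as the log:

    # Remove any kubeconfig that does not point at the expected control-plane endpoint.
    ENDPOINT="https://control-plane.minikube.internal:8444"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" 2>/dev/null; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done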
	I0528 22:03:36.111166   73188 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0528 22:03:36.167427   73188 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0528 22:03:36.167584   73188 kubeadm.go:309] [preflight] Running pre-flight checks
	I0528 22:03:36.319657   73188 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0528 22:03:36.319762   73188 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0528 22:03:36.319861   73188 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0528 22:03:36.570417   73188 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0528 22:03:36.572409   73188 out.go:204]   - Generating certificates and keys ...
	I0528 22:03:36.572503   73188 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0528 22:03:36.572615   73188 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0528 22:03:36.572723   73188 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0528 22:03:36.572801   73188 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0528 22:03:36.572895   73188 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0528 22:03:36.572944   73188 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0528 22:03:36.572999   73188 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0528 22:03:36.573087   73188 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0528 22:03:36.573192   73188 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0528 22:03:36.573348   73188 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0528 22:03:36.573818   73188 kubeadm.go:309] [certs] Using the existing "sa" key
	I0528 22:03:36.573889   73188 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0528 22:03:36.671532   73188 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0528 22:03:36.741211   73188 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0528 22:03:36.908326   73188 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0528 22:03:37.058636   73188 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0528 22:03:37.237907   73188 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0528 22:03:37.238660   73188 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0528 22:03:37.242660   73188 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0528 22:03:37.244632   73188 out.go:204]   - Booting up control plane ...
	I0528 22:03:37.244721   73188 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0528 22:03:37.244790   73188 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0528 22:03:37.244999   73188 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0528 22:03:37.267448   73188 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0528 22:03:37.268482   73188 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0528 22:03:37.268550   73188 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0528 22:03:37.405936   73188 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0528 22:03:37.406050   73188 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0528 22:03:37.907833   73188 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.378139ms
	I0528 22:03:37.907936   73188 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0528 22:03:42.910213   73188 kubeadm.go:309] [api-check] The API server is healthy after 5.00224578s
	I0528 22:03:42.926650   73188 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0528 22:03:42.943917   73188 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0528 22:03:42.972044   73188 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0528 22:03:42.972264   73188 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-249165 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0528 22:03:42.986882   73188 kubeadm.go:309] [bootstrap-token] Using token: cf4624.vgyi0c4jykmr5x8u
	I0528 22:03:42.988295   73188 out.go:204]   - Configuring RBAC rules ...
	I0528 22:03:42.988438   73188 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0528 22:03:42.994583   73188 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0528 22:03:43.003191   73188 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0528 22:03:43.007110   73188 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0528 22:03:43.014038   73188 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0528 22:03:43.022358   73188 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0528 22:03:43.322836   73188 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0528 22:03:43.790286   73188 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0528 22:03:44.317555   73188 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0528 22:03:44.318811   73188 kubeadm.go:309] 
	I0528 22:03:44.318906   73188 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0528 22:03:44.318933   73188 kubeadm.go:309] 
	I0528 22:03:44.319041   73188 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0528 22:03:44.319052   73188 kubeadm.go:309] 
	I0528 22:03:44.319073   73188 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0528 22:03:44.319128   73188 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0528 22:03:44.319171   73188 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0528 22:03:44.319178   73188 kubeadm.go:309] 
	I0528 22:03:44.319333   73188 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0528 22:03:44.319349   73188 kubeadm.go:309] 
	I0528 22:03:44.319390   73188 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0528 22:03:44.319395   73188 kubeadm.go:309] 
	I0528 22:03:44.319437   73188 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0528 22:03:44.319501   73188 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0528 22:03:44.319597   73188 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0528 22:03:44.319617   73188 kubeadm.go:309] 
	I0528 22:03:44.319758   73188 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0528 22:03:44.319881   73188 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0528 22:03:44.319894   73188 kubeadm.go:309] 
	I0528 22:03:44.320006   73188 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token cf4624.vgyi0c4jykmr5x8u \
	I0528 22:03:44.320098   73188 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1de7c92f7c6fda1d3adee02fe3e58e442e8ad50d9d224864ca8764aa1a3ae8bb \
	I0528 22:03:44.320118   73188 kubeadm.go:309] 	--control-plane 
	I0528 22:03:44.320125   73188 kubeadm.go:309] 
	I0528 22:03:44.320201   73188 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0528 22:03:44.320209   73188 kubeadm.go:309] 
	I0528 22:03:44.320284   73188 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token cf4624.vgyi0c4jykmr5x8u \
	I0528 22:03:44.320405   73188 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1de7c92f7c6fda1d3adee02fe3e58e442e8ad50d9d224864ca8764aa1a3ae8bb 
	I0528 22:03:44.320885   73188 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
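Editor's note: the join commands printed above embed a --discovery-token-ca-cert-hash. For reference, that sha256 value can be recomputed on the control-plane node with the standard openssl pipeline from the kubeadm documentation; since this run uses certificateDir "/var/lib/minikube/certs" (see the [certs] lines above), the CA path below assumes the cluster CA lives there.

    # Recompute the discovery-token CA cert hash and compare it with the value in the join command.
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex \
      | sed 's/^.* //'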
	I0528 22:03:44.320929   73188 cni.go:84] Creating CNI manager for ""
	I0528 22:03:44.320945   73188 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 22:03:44.322688   73188 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0528 22:03:44.323999   73188 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0528 22:03:44.335532   73188 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0528 22:03:44.356272   73188 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0528 22:03:44.356380   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:44.356387   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-249165 minikube.k8s.io/updated_at=2024_05_28T22_03_44_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82 minikube.k8s.io/name=default-k8s-diff-port-249165 minikube.k8s.io/primary=true
	I0528 22:03:44.384624   73188 ops.go:34] apiserver oom_adj: -16
	I0528 22:03:44.563265   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:45.063599   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:45.563789   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:46.063279   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:46.564010   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:47.063573   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:47.563386   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:48.064282   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:48.563854   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:49.063459   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:49.564059   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:50.064286   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:50.564237   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:51.063435   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:51.563256   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:52.063661   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:52.563554   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:53.063681   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:53.563368   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:54.063863   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:54.563426   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:55.063793   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:55.564268   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:56.063717   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:56.563689   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:56.664824   73188 kubeadm.go:1107] duration metric: took 12.308506231s to wait for elevateKubeSystemPrivileges
	W0528 22:03:56.664873   73188 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0528 22:03:56.664885   73188 kubeadm.go:393] duration metric: took 5m5.529497247s to StartCluster
	I0528 22:03:56.664908   73188 settings.go:142] acquiring lock: {Name:mkc1182212d780ab38539839372c25eb989b2e70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 22:03:56.664987   73188 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 22:03:56.667020   73188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/kubeconfig: {Name:mkb9ab62df00efc21274382bbe7156f03baabac9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 22:03:56.667272   73188 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.48 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0528 22:03:56.669019   73188 out.go:177] * Verifying Kubernetes components...
	I0528 22:03:56.667382   73188 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0528 22:03:56.667455   73188 config.go:182] Loaded profile config "default-k8s-diff-port-249165": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 22:03:56.672619   73188 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-249165"
	I0528 22:03:56.672634   73188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 22:03:56.672634   73188 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-249165"
	I0528 22:03:56.672659   73188 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-249165"
	I0528 22:03:56.672665   73188 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-249165"
	W0528 22:03:56.672671   73188 addons.go:243] addon storage-provisioner should already be in state true
	W0528 22:03:56.672673   73188 addons.go:243] addon metrics-server should already be in state true
	I0528 22:03:56.672696   73188 host.go:66] Checking if "default-k8s-diff-port-249165" exists ...
	I0528 22:03:56.672699   73188 host.go:66] Checking if "default-k8s-diff-port-249165" exists ...
	I0528 22:03:56.672625   73188 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-249165"
	I0528 22:03:56.672741   73188 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-249165"
	I0528 22:03:56.672973   73188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 22:03:56.672993   73188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 22:03:56.673010   73188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 22:03:56.673026   73188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 22:03:56.673163   73188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 22:03:56.673194   73188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 22:03:56.689257   73188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38053
	I0528 22:03:56.689499   73188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41989
	I0528 22:03:56.689836   73188 main.go:141] libmachine: () Calling .GetVersion
	I0528 22:03:56.689955   73188 main.go:141] libmachine: () Calling .GetVersion
	I0528 22:03:56.690383   73188 main.go:141] libmachine: Using API Version  1
	I0528 22:03:56.690403   73188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 22:03:56.690538   73188 main.go:141] libmachine: Using API Version  1
	I0528 22:03:56.690555   73188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 22:03:56.690738   73188 main.go:141] libmachine: () Calling .GetMachineName
	I0528 22:03:56.690899   73188 main.go:141] libmachine: () Calling .GetMachineName
	I0528 22:03:56.691287   73188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 22:03:56.691323   73188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 22:03:56.691754   73188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 22:03:56.691785   73188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 22:03:56.692291   73188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32897
	I0528 22:03:56.692685   73188 main.go:141] libmachine: () Calling .GetVersion
	I0528 22:03:56.693220   73188 main.go:141] libmachine: Using API Version  1
	I0528 22:03:56.693245   73188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 22:03:56.693626   73188 main.go:141] libmachine: () Calling .GetMachineName
	I0528 22:03:56.693856   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetState
	I0528 22:03:56.697987   73188 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-249165"
	W0528 22:03:56.698008   73188 addons.go:243] addon default-storageclass should already be in state true
	I0528 22:03:56.698037   73188 host.go:66] Checking if "default-k8s-diff-port-249165" exists ...
	I0528 22:03:56.698396   73188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 22:03:56.698440   73188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 22:03:56.707841   73188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34727
	I0528 22:03:56.708297   73188 main.go:141] libmachine: () Calling .GetVersion
	I0528 22:03:56.710004   73188 main.go:141] libmachine: Using API Version  1
	I0528 22:03:56.710031   73188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 22:03:56.710055   73188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41499
	I0528 22:03:56.710537   73188 main.go:141] libmachine: () Calling .GetMachineName
	I0528 22:03:56.710741   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetState
	I0528 22:03:56.710818   73188 main.go:141] libmachine: () Calling .GetVersion
	I0528 22:03:56.711308   73188 main.go:141] libmachine: Using API Version  1
	I0528 22:03:56.711333   73188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 22:03:56.711655   73188 main.go:141] libmachine: () Calling .GetMachineName
	I0528 22:03:56.711830   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetState
	I0528 22:03:56.713789   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 22:03:56.716114   73188 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0528 22:03:56.714205   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 22:03:56.717642   73188 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0528 22:03:56.717661   73188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0528 22:03:56.717682   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 22:03:56.719665   73188 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0528 22:03:56.720996   73188 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0528 22:03:56.721011   73188 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0528 22:03:56.721026   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 22:03:56.720668   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 22:03:56.721097   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 22:03:56.721113   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 22:03:56.721212   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHPort
	I0528 22:03:56.721387   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 22:03:56.721521   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHUsername
	I0528 22:03:56.721654   73188 sshutil.go:53] new ssh client: &{IP:192.168.72.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/default-k8s-diff-port-249165/id_rsa Username:docker}
	I0528 22:03:56.724508   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 22:03:56.724964   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 22:03:56.725036   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 22:03:56.725075   73188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38031
	I0528 22:03:56.725301   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHPort
	I0528 22:03:56.725445   73188 main.go:141] libmachine: () Calling .GetVersion
	I0528 22:03:56.725458   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 22:03:56.725595   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHUsername
	I0528 22:03:56.725728   73188 sshutil.go:53] new ssh client: &{IP:192.168.72.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/default-k8s-diff-port-249165/id_rsa Username:docker}
	I0528 22:03:56.725960   73188 main.go:141] libmachine: Using API Version  1
	I0528 22:03:56.725976   73188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 22:03:56.726329   73188 main.go:141] libmachine: () Calling .GetMachineName
	I0528 22:03:56.726874   73188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 22:03:56.726907   73188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 22:03:56.742977   73188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35273
	I0528 22:03:56.743565   73188 main.go:141] libmachine: () Calling .GetVersion
	I0528 22:03:56.744141   73188 main.go:141] libmachine: Using API Version  1
	I0528 22:03:56.744156   73188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 22:03:56.744585   73188 main.go:141] libmachine: () Calling .GetMachineName
	I0528 22:03:56.744742   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetState
	I0528 22:03:56.746660   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 22:03:56.746937   73188 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0528 22:03:56.746953   73188 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0528 22:03:56.746975   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 22:03:56.749996   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 22:03:56.750477   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 22:03:56.750505   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 22:03:56.750680   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHPort
	I0528 22:03:56.750834   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 22:03:56.750977   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHUsername
	I0528 22:03:56.751108   73188 sshutil.go:53] new ssh client: &{IP:192.168.72.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/default-k8s-diff-port-249165/id_rsa Username:docker}
	I0528 22:03:56.917578   73188 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 22:03:56.948739   73188 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-249165" to be "Ready" ...
	I0528 22:03:56.960279   73188 node_ready.go:49] node "default-k8s-diff-port-249165" has status "Ready":"True"
	I0528 22:03:56.960331   73188 node_ready.go:38] duration metric: took 11.549106ms for node "default-k8s-diff-port-249165" to be "Ready" ...
	I0528 22:03:56.960343   73188 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 22:03:56.967728   73188 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-249165" in "kube-system" namespace to be "Ready" ...
	I0528 22:03:56.973605   73188 pod_ready.go:92] pod "etcd-default-k8s-diff-port-249165" in "kube-system" namespace has status "Ready":"True"
	I0528 22:03:56.973626   73188 pod_ready.go:81] duration metric: took 5.846822ms for pod "etcd-default-k8s-diff-port-249165" in "kube-system" namespace to be "Ready" ...
	I0528 22:03:56.973637   73188 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-249165" in "kube-system" namespace to be "Ready" ...
	I0528 22:03:56.978965   73188 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-249165" in "kube-system" namespace has status "Ready":"True"
	I0528 22:03:56.978991   73188 pod_ready.go:81] duration metric: took 5.346348ms for pod "kube-apiserver-default-k8s-diff-port-249165" in "kube-system" namespace to be "Ready" ...
	I0528 22:03:56.979003   73188 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-249165" in "kube-system" namespace to be "Ready" ...
	I0528 22:03:56.992525   73188 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-249165" in "kube-system" namespace has status "Ready":"True"
	I0528 22:03:56.992553   73188 pod_ready.go:81] duration metric: took 13.54102ms for pod "kube-controller-manager-default-k8s-diff-port-249165" in "kube-system" namespace to be "Ready" ...
	I0528 22:03:56.992565   73188 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-249165" in "kube-system" namespace to be "Ready" ...
	I0528 22:03:56.999982   73188 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-249165" in "kube-system" namespace has status "Ready":"True"
	I0528 22:03:57.000004   73188 pod_ready.go:81] duration metric: took 7.430535ms for pod "kube-scheduler-default-k8s-diff-port-249165" in "kube-system" namespace to be "Ready" ...
	I0528 22:03:57.000012   73188 pod_ready.go:38] duration metric: took 39.659784ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
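	(The node and system-critical-pod readiness waits above can be reproduced by hand with kubectl; this is a rough sketch, not the test's own code path, and assumes the kubectl context that minikube configures for this profile:

	    kubectl --context default-k8s-diff-port-249165 wait --for=condition=Ready node/default-k8s-diff-port-249165 --timeout=6m
	    kubectl --context default-k8s-diff-port-249165 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
	)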
	I0528 22:03:57.000025   73188 api_server.go:52] waiting for apiserver process to appear ...
	I0528 22:03:57.000081   73188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 22:03:57.005838   73188 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0528 22:03:57.005866   73188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0528 22:03:57.024072   73188 api_server.go:72] duration metric: took 356.761134ms to wait for apiserver process to appear ...
	I0528 22:03:57.024093   73188 api_server.go:88] waiting for apiserver healthz status ...
	I0528 22:03:57.024110   73188 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8444/healthz ...
	I0528 22:03:57.032258   73188 api_server.go:279] https://192.168.72.48:8444/healthz returned 200:
	ok
	I0528 22:03:57.033413   73188 api_server.go:141] control plane version: v1.30.1
	I0528 22:03:57.033434   73188 api_server.go:131] duration metric: took 9.333959ms to wait for apiserver health ...
	I0528 22:03:57.033444   73188 system_pods.go:43] waiting for kube-system pods to appear ...
	I0528 22:03:57.046727   73188 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0528 22:03:57.046750   73188 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0528 22:03:57.105303   73188 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0528 22:03:57.105327   73188 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0528 22:03:57.123417   73188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0528 22:03:57.158565   73188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0528 22:03:57.178241   73188 system_pods.go:59] 5 kube-system pods found
	I0528 22:03:57.178282   73188 system_pods.go:61] "etcd-default-k8s-diff-port-249165" [8554d79a-3cad-4bc3-96bb-37f0084b46ce] Running
	I0528 22:03:57.178289   73188 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-249165" [289ffd6a-4f42-450a-8bcc-9769a7f233bb] Running
	I0528 22:03:57.178295   73188 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-249165" [57235e01-ba9a-45f8-bdbd-032c5a258797] Running
	I0528 22:03:57.178304   73188 system_pods.go:61] "kube-proxy-b2nd9" [df64df09-8898-44db-919c-0b1d564538ee] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0528 22:03:57.178363   73188 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-249165" [1804acb1-4682-422a-9a09-7b79589f3ae5] Running
	I0528 22:03:57.178378   73188 system_pods.go:74] duration metric: took 144.927386ms to wait for pod list to return data ...
	I0528 22:03:57.178389   73188 default_sa.go:34] waiting for default service account to be created ...
	I0528 22:03:57.202680   73188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0528 22:03:57.370886   73188 default_sa.go:45] found service account: "default"
	I0528 22:03:57.370917   73188 default_sa.go:55] duration metric: took 192.512428ms for default service account to be created ...
	I0528 22:03:57.370928   73188 system_pods.go:116] waiting for k8s-apps to be running ...
	I0528 22:03:57.627455   73188 system_pods.go:86] 7 kube-system pods found
	I0528 22:03:57.627489   73188 system_pods.go:89] "coredns-7db6d8ff4d-9v4qf" [970de16b-4ade-4d82-8f78-fc83fc86fc8a] Pending
	I0528 22:03:57.627497   73188 system_pods.go:89] "coredns-7db6d8ff4d-m7n7k" [caf303ad-139a-4b42-820e-617fa654399c] Pending
	I0528 22:03:57.627504   73188 system_pods.go:89] "etcd-default-k8s-diff-port-249165" [8554d79a-3cad-4bc3-96bb-37f0084b46ce] Running
	I0528 22:03:57.627511   73188 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-249165" [289ffd6a-4f42-450a-8bcc-9769a7f233bb] Running
	I0528 22:03:57.627518   73188 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-249165" [57235e01-ba9a-45f8-bdbd-032c5a258797] Running
	I0528 22:03:57.627528   73188 system_pods.go:89] "kube-proxy-b2nd9" [df64df09-8898-44db-919c-0b1d564538ee] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0528 22:03:57.627535   73188 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-249165" [1804acb1-4682-422a-9a09-7b79589f3ae5] Running
	I0528 22:03:57.627559   73188 retry.go:31] will retry after 254.633885ms: missing components: kube-dns, kube-proxy
	I0528 22:03:57.888116   73188 system_pods.go:86] 7 kube-system pods found
	I0528 22:03:57.888151   73188 system_pods.go:89] "coredns-7db6d8ff4d-9v4qf" [970de16b-4ade-4d82-8f78-fc83fc86fc8a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0528 22:03:57.888163   73188 system_pods.go:89] "coredns-7db6d8ff4d-m7n7k" [caf303ad-139a-4b42-820e-617fa654399c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0528 22:03:57.888170   73188 system_pods.go:89] "etcd-default-k8s-diff-port-249165" [8554d79a-3cad-4bc3-96bb-37f0084b46ce] Running
	I0528 22:03:57.888178   73188 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-249165" [289ffd6a-4f42-450a-8bcc-9769a7f233bb] Running
	I0528 22:03:57.888184   73188 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-249165" [57235e01-ba9a-45f8-bdbd-032c5a258797] Running
	I0528 22:03:57.888194   73188 system_pods.go:89] "kube-proxy-b2nd9" [df64df09-8898-44db-919c-0b1d564538ee] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0528 22:03:57.888201   73188 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-249165" [1804acb1-4682-422a-9a09-7b79589f3ae5] Running
	I0528 22:03:57.888223   73188 retry.go:31] will retry after 268.738305ms: missing components: kube-dns, kube-proxy
	I0528 22:03:58.043325   73188 main.go:141] libmachine: Making call to close driver server
	I0528 22:03:58.043356   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .Close
	I0528 22:03:58.043650   73188 main.go:141] libmachine: Successfully made call to close driver server
	I0528 22:03:58.043674   73188 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 22:03:58.043693   73188 main.go:141] libmachine: Making call to close driver server
	I0528 22:03:58.043707   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .Close
	I0528 22:03:58.043949   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | Closing plugin on server side
	I0528 22:03:58.044008   73188 main.go:141] libmachine: Successfully made call to close driver server
	I0528 22:03:58.044028   73188 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 22:03:58.049206   73188 main.go:141] libmachine: Making call to close driver server
	I0528 22:03:58.049225   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .Close
	I0528 22:03:58.049473   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | Closing plugin on server side
	I0528 22:03:58.049518   73188 main.go:141] libmachine: Successfully made call to close driver server
	I0528 22:03:58.049528   73188 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 22:03:58.049540   73188 main.go:141] libmachine: Making call to close driver server
	I0528 22:03:58.049550   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .Close
	I0528 22:03:58.049785   73188 main.go:141] libmachine: Successfully made call to close driver server
	I0528 22:03:58.049801   73188 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 22:03:58.065546   73188 main.go:141] libmachine: Making call to close driver server
	I0528 22:03:58.065567   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .Close
	I0528 22:03:58.065857   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | Closing plugin on server side
	I0528 22:03:58.065884   73188 main.go:141] libmachine: Successfully made call to close driver server
	I0528 22:03:58.065898   73188 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 22:03:58.169017   73188 system_pods.go:86] 8 kube-system pods found
	I0528 22:03:58.169047   73188 system_pods.go:89] "coredns-7db6d8ff4d-9v4qf" [970de16b-4ade-4d82-8f78-fc83fc86fc8a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0528 22:03:58.169054   73188 system_pods.go:89] "coredns-7db6d8ff4d-m7n7k" [caf303ad-139a-4b42-820e-617fa654399c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0528 22:03:58.169062   73188 system_pods.go:89] "etcd-default-k8s-diff-port-249165" [8554d79a-3cad-4bc3-96bb-37f0084b46ce] Running
	I0528 22:03:58.169070   73188 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-249165" [289ffd6a-4f42-450a-8bcc-9769a7f233bb] Running
	I0528 22:03:58.169077   73188 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-249165" [57235e01-ba9a-45f8-bdbd-032c5a258797] Running
	I0528 22:03:58.169085   73188 system_pods.go:89] "kube-proxy-b2nd9" [df64df09-8898-44db-919c-0b1d564538ee] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0528 22:03:58.169091   73188 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-249165" [1804acb1-4682-422a-9a09-7b79589f3ae5] Running
	I0528 22:03:58.169101   73188 system_pods.go:89] "storage-provisioner" [be3fd3ac-6795-4168-bd94-007932dcbb2c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0528 22:03:58.169119   73188 retry.go:31] will retry after 296.463415ms: missing components: kube-dns, kube-proxy
	I0528 22:03:58.348570   73188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.145845195s)
	I0528 22:03:58.348628   73188 main.go:141] libmachine: Making call to close driver server
	I0528 22:03:58.348646   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .Close
	I0528 22:03:58.348982   73188 main.go:141] libmachine: Successfully made call to close driver server
	I0528 22:03:58.348993   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | Closing plugin on server side
	I0528 22:03:58.349011   73188 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 22:03:58.349022   73188 main.go:141] libmachine: Making call to close driver server
	I0528 22:03:58.349030   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .Close
	I0528 22:03:58.349262   73188 main.go:141] libmachine: Successfully made call to close driver server
	I0528 22:03:58.349277   73188 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 22:03:58.349288   73188 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-249165"
	I0528 22:03:58.351022   73188 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0528 22:03:58.352295   73188 addons.go:510] duration metric: took 1.684913905s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0528 22:03:58.475437   73188 system_pods.go:86] 9 kube-system pods found
	I0528 22:03:58.475469   73188 system_pods.go:89] "coredns-7db6d8ff4d-9v4qf" [970de16b-4ade-4d82-8f78-fc83fc86fc8a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0528 22:03:58.475477   73188 system_pods.go:89] "coredns-7db6d8ff4d-m7n7k" [caf303ad-139a-4b42-820e-617fa654399c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0528 22:03:58.475485   73188 system_pods.go:89] "etcd-default-k8s-diff-port-249165" [8554d79a-3cad-4bc3-96bb-37f0084b46ce] Running
	I0528 22:03:58.475491   73188 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-249165" [289ffd6a-4f42-450a-8bcc-9769a7f233bb] Running
	I0528 22:03:58.475495   73188 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-249165" [57235e01-ba9a-45f8-bdbd-032c5a258797] Running
	I0528 22:03:58.475500   73188 system_pods.go:89] "kube-proxy-b2nd9" [df64df09-8898-44db-919c-0b1d564538ee] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0528 22:03:58.475505   73188 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-249165" [1804acb1-4682-422a-9a09-7b79589f3ae5] Running
	I0528 22:03:58.475511   73188 system_pods.go:89] "metrics-server-569cc877fc-6q6pz" [443b12f9-e99d-4bb7-ae3f-8a25ed277f44] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0528 22:03:58.475523   73188 system_pods.go:89] "storage-provisioner" [be3fd3ac-6795-4168-bd94-007932dcbb2c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0528 22:03:58.475539   73188 retry.go:31] will retry after 570.589575ms: missing components: kube-dns, kube-proxy
	I0528 22:03:59.056553   73188 system_pods.go:86] 9 kube-system pods found
	I0528 22:03:59.056585   73188 system_pods.go:89] "coredns-7db6d8ff4d-9v4qf" [970de16b-4ade-4d82-8f78-fc83fc86fc8a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0528 22:03:59.056608   73188 system_pods.go:89] "coredns-7db6d8ff4d-m7n7k" [caf303ad-139a-4b42-820e-617fa654399c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0528 22:03:59.056615   73188 system_pods.go:89] "etcd-default-k8s-diff-port-249165" [8554d79a-3cad-4bc3-96bb-37f0084b46ce] Running
	I0528 22:03:59.056621   73188 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-249165" [289ffd6a-4f42-450a-8bcc-9769a7f233bb] Running
	I0528 22:03:59.056625   73188 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-249165" [57235e01-ba9a-45f8-bdbd-032c5a258797] Running
	I0528 22:03:59.056630   73188 system_pods.go:89] "kube-proxy-b2nd9" [df64df09-8898-44db-919c-0b1d564538ee] Running
	I0528 22:03:59.056635   73188 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-249165" [1804acb1-4682-422a-9a09-7b79589f3ae5] Running
	I0528 22:03:59.056641   73188 system_pods.go:89] "metrics-server-569cc877fc-6q6pz" [443b12f9-e99d-4bb7-ae3f-8a25ed277f44] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0528 22:03:59.056648   73188 system_pods.go:89] "storage-provisioner" [be3fd3ac-6795-4168-bd94-007932dcbb2c] Running
	I0528 22:03:59.056662   73188 retry.go:31] will retry after 524.559216ms: missing components: kube-dns
	I0528 22:03:59.587811   73188 system_pods.go:86] 9 kube-system pods found
	I0528 22:03:59.587841   73188 system_pods.go:89] "coredns-7db6d8ff4d-9v4qf" [970de16b-4ade-4d82-8f78-fc83fc86fc8a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0528 22:03:59.587849   73188 system_pods.go:89] "coredns-7db6d8ff4d-m7n7k" [caf303ad-139a-4b42-820e-617fa654399c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0528 22:03:59.587856   73188 system_pods.go:89] "etcd-default-k8s-diff-port-249165" [8554d79a-3cad-4bc3-96bb-37f0084b46ce] Running
	I0528 22:03:59.587862   73188 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-249165" [289ffd6a-4f42-450a-8bcc-9769a7f233bb] Running
	I0528 22:03:59.587866   73188 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-249165" [57235e01-ba9a-45f8-bdbd-032c5a258797] Running
	I0528 22:03:59.587870   73188 system_pods.go:89] "kube-proxy-b2nd9" [df64df09-8898-44db-919c-0b1d564538ee] Running
	I0528 22:03:59.587874   73188 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-249165" [1804acb1-4682-422a-9a09-7b79589f3ae5] Running
	I0528 22:03:59.587880   73188 system_pods.go:89] "metrics-server-569cc877fc-6q6pz" [443b12f9-e99d-4bb7-ae3f-8a25ed277f44] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0528 22:03:59.587884   73188 system_pods.go:89] "storage-provisioner" [be3fd3ac-6795-4168-bd94-007932dcbb2c] Running
	I0528 22:03:59.587897   73188 retry.go:31] will retry after 629.323845ms: missing components: kube-dns
	I0528 22:04:00.227627   73188 system_pods.go:86] 9 kube-system pods found
	I0528 22:04:00.227659   73188 system_pods.go:89] "coredns-7db6d8ff4d-9v4qf" [970de16b-4ade-4d82-8f78-fc83fc86fc8a] Running
	I0528 22:04:00.227664   73188 system_pods.go:89] "coredns-7db6d8ff4d-m7n7k" [caf303ad-139a-4b42-820e-617fa654399c] Running
	I0528 22:04:00.227669   73188 system_pods.go:89] "etcd-default-k8s-diff-port-249165" [8554d79a-3cad-4bc3-96bb-37f0084b46ce] Running
	I0528 22:04:00.227674   73188 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-249165" [289ffd6a-4f42-450a-8bcc-9769a7f233bb] Running
	I0528 22:04:00.227679   73188 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-249165" [57235e01-ba9a-45f8-bdbd-032c5a258797] Running
	I0528 22:04:00.227683   73188 system_pods.go:89] "kube-proxy-b2nd9" [df64df09-8898-44db-919c-0b1d564538ee] Running
	I0528 22:04:00.227687   73188 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-249165" [1804acb1-4682-422a-9a09-7b79589f3ae5] Running
	I0528 22:04:00.227694   73188 system_pods.go:89] "metrics-server-569cc877fc-6q6pz" [443b12f9-e99d-4bb7-ae3f-8a25ed277f44] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0528 22:04:00.227698   73188 system_pods.go:89] "storage-provisioner" [be3fd3ac-6795-4168-bd94-007932dcbb2c] Running
	I0528 22:04:00.227709   73188 system_pods.go:126] duration metric: took 2.856773755s to wait for k8s-apps to be running ...
	I0528 22:04:00.227719   73188 system_svc.go:44] waiting for kubelet service to be running ....
	I0528 22:04:00.227759   73188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 22:04:00.245865   73188 system_svc.go:56] duration metric: took 18.136353ms WaitForService to wait for kubelet
	I0528 22:04:00.245901   73188 kubeadm.go:576] duration metric: took 3.578592994s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 22:04:00.245927   73188 node_conditions.go:102] verifying NodePressure condition ...
	I0528 22:04:00.248867   73188 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 22:04:00.248891   73188 node_conditions.go:123] node cpu capacity is 2
	I0528 22:04:00.248906   73188 node_conditions.go:105] duration metric: took 2.971728ms to run NodePressure ...
	I0528 22:04:00.248923   73188 start.go:240] waiting for startup goroutines ...
	I0528 22:04:00.248934   73188 start.go:245] waiting for cluster config update ...
	I0528 22:04:00.248951   73188 start.go:254] writing updated cluster config ...
	I0528 22:04:00.249278   73188 ssh_runner.go:195] Run: rm -f paused
	I0528 22:04:00.297365   73188 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0528 22:04:00.299141   73188 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-249165" cluster and "default" namespace by default
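	(A quick manual cross-check of what the run above reports, enabled addons and running kube-system pods, is sketched below; the commands assume the kubectl context that minikube just configured for this profile:

	    minikube -p default-k8s-diff-port-249165 addons list
	    kubectl --context default-k8s-diff-port-249165 -n kube-system get pods
	    kubectl --context default-k8s-diff-port-249165 -n kube-system get deploy metrics-server
	    kubectl --context default-k8s-diff-port-249165 get storageclass
	)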
	
	
	==> CRI-O <==
	May 28 22:06:27 old-k8s-version-499466 crio[643]: time="2024-05-28 22:06:27.758901356Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716933987758875473,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=693957d2-de24-4517-87b7-9c6abeed10e7 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:06:27 old-k8s-version-499466 crio[643]: time="2024-05-28 22:06:27.759641501Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=604c052b-8a49-487c-b8d4-2d3726a6ae83 name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:06:27 old-k8s-version-499466 crio[643]: time="2024-05-28 22:06:27.759727901Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=604c052b-8a49-487c-b8d4-2d3726a6ae83 name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:06:27 old-k8s-version-499466 crio[643]: time="2024-05-28 22:06:27.759768313Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=604c052b-8a49-487c-b8d4-2d3726a6ae83 name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:06:27 old-k8s-version-499466 crio[643]: time="2024-05-28 22:06:27.793396923Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=95f6f7dd-e3ef-4ea6-b155-290c1e5c4973 name=/runtime.v1.RuntimeService/Version
	May 28 22:06:27 old-k8s-version-499466 crio[643]: time="2024-05-28 22:06:27.793481190Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=95f6f7dd-e3ef-4ea6-b155-290c1e5c4973 name=/runtime.v1.RuntimeService/Version
	May 28 22:06:27 old-k8s-version-499466 crio[643]: time="2024-05-28 22:06:27.794776678Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e092c44c-7b10-41d1-804e-67e66b26b92d name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:06:27 old-k8s-version-499466 crio[643]: time="2024-05-28 22:06:27.795239236Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716933987795161633,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e092c44c-7b10-41d1-804e-67e66b26b92d name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:06:27 old-k8s-version-499466 crio[643]: time="2024-05-28 22:06:27.795811461Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a9bc6b0d-7447-4796-ad6d-b121c21dd192 name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:06:27 old-k8s-version-499466 crio[643]: time="2024-05-28 22:06:27.795866886Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a9bc6b0d-7447-4796-ad6d-b121c21dd192 name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:06:27 old-k8s-version-499466 crio[643]: time="2024-05-28 22:06:27.795903836Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a9bc6b0d-7447-4796-ad6d-b121c21dd192 name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:06:27 old-k8s-version-499466 crio[643]: time="2024-05-28 22:06:27.833413000Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=51b66b94-9796-430c-b91a-c680cd411036 name=/runtime.v1.RuntimeService/Version
	May 28 22:06:27 old-k8s-version-499466 crio[643]: time="2024-05-28 22:06:27.833519288Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=51b66b94-9796-430c-b91a-c680cd411036 name=/runtime.v1.RuntimeService/Version
	May 28 22:06:27 old-k8s-version-499466 crio[643]: time="2024-05-28 22:06:27.835056056Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5da458dd-f9c5-4d3e-bcfd-9b4aa3a90fed name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:06:27 old-k8s-version-499466 crio[643]: time="2024-05-28 22:06:27.835687877Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716933987835649635,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5da458dd-f9c5-4d3e-bcfd-9b4aa3a90fed name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:06:27 old-k8s-version-499466 crio[643]: time="2024-05-28 22:06:27.836502278Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ae42690d-8a84-4ea2-90d2-9800b56310e3 name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:06:27 old-k8s-version-499466 crio[643]: time="2024-05-28 22:06:27.836582567Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ae42690d-8a84-4ea2-90d2-9800b56310e3 name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:06:27 old-k8s-version-499466 crio[643]: time="2024-05-28 22:06:27.836638263Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ae42690d-8a84-4ea2-90d2-9800b56310e3 name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:06:27 old-k8s-version-499466 crio[643]: time="2024-05-28 22:06:27.870018802Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4d369778-c718-4ab2-bed1-49955389905e name=/runtime.v1.RuntimeService/Version
	May 28 22:06:27 old-k8s-version-499466 crio[643]: time="2024-05-28 22:06:27.870097118Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4d369778-c718-4ab2-bed1-49955389905e name=/runtime.v1.RuntimeService/Version
	May 28 22:06:27 old-k8s-version-499466 crio[643]: time="2024-05-28 22:06:27.871334637Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=388fb64b-ccb8-4b4e-9560-0123b5ee65a5 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:06:27 old-k8s-version-499466 crio[643]: time="2024-05-28 22:06:27.871722695Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716933987871701781,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=388fb64b-ccb8-4b4e-9560-0123b5ee65a5 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:06:27 old-k8s-version-499466 crio[643]: time="2024-05-28 22:06:27.872344816Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6de12aa3-a822-4c86-ad6d-7a67ca09f340 name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:06:27 old-k8s-version-499466 crio[643]: time="2024-05-28 22:06:27.872398969Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6de12aa3-a822-4c86-ad6d-7a67ca09f340 name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:06:27 old-k8s-version-499466 crio[643]: time="2024-05-28 22:06:27.872432085Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6de12aa3-a822-4c86-ad6d-7a67ca09f340 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[May28 21:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.059723] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041122] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.612680] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.319990] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.591576] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.302597] systemd-fstab-generator[565]: Ignoring "noauto" option for root device
	[  +0.059124] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058807] systemd-fstab-generator[577]: Ignoring "noauto" option for root device
	[  +0.173273] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.170028] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.245355] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +6.602395] systemd-fstab-generator[831]: Ignoring "noauto" option for root device
	[  +0.061119] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.883111] systemd-fstab-generator[957]: Ignoring "noauto" option for root device
	[ +13.815764] kauditd_printk_skb: 46 callbacks suppressed
	[May28 21:53] systemd-fstab-generator[5029]: Ignoring "noauto" option for root device
	[May28 21:55] systemd-fstab-generator[5306]: Ignoring "noauto" option for root device
	[  +0.062272] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 22:06:28 up 17 min,  0 users,  load average: 0.00, 0.03, 0.05
	Linux old-k8s-version-499466 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	May 28 22:06:25 old-k8s-version-499466 kubelet[6483]:         /usr/local/go/src/net/lookup.go:299 +0x685
	May 28 22:06:25 old-k8s-version-499466 kubelet[6483]: net.(*Resolver).internetAddrList(0x70c5740, 0x4f7fe40, 0xc000285d40, 0x48ab5d6, 0x3, 0xc00057a9f0, 0x24, 0x0, 0x0, 0x0, ...)
	May 28 22:06:25 old-k8s-version-499466 kubelet[6483]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	May 28 22:06:25 old-k8s-version-499466 kubelet[6483]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc000285d40, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc00057a9f0, 0x24, 0x0, ...)
	May 28 22:06:25 old-k8s-version-499466 kubelet[6483]:         /usr/local/go/src/net/dial.go:221 +0x47d
	May 28 22:06:25 old-k8s-version-499466 kubelet[6483]: net.(*Dialer).DialContext(0xc0000ee780, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc00057a9f0, 0x24, 0x0, 0x0, 0x0, ...)
	May 28 22:06:25 old-k8s-version-499466 kubelet[6483]:         /usr/local/go/src/net/dial.go:403 +0x22b
	May 28 22:06:25 old-k8s-version-499466 kubelet[6483]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000a27e40, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc00057a9f0, 0x24, 0x1000000000060, 0x7f9dc5da1920, 0x118, ...)
	May 28 22:06:25 old-k8s-version-499466 kubelet[6483]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	May 28 22:06:25 old-k8s-version-499466 kubelet[6483]: net/http.(*Transport).dial(0xc000afe000, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc00057a9f0, 0x24, 0x0, 0x0, 0x0, ...)
	May 28 22:06:25 old-k8s-version-499466 kubelet[6483]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	May 28 22:06:25 old-k8s-version-499466 kubelet[6483]: net/http.(*Transport).dialConn(0xc000afe000, 0x4f7fe00, 0xc000120018, 0x0, 0xc000920300, 0x5, 0xc00057a9f0, 0x24, 0x0, 0xc000a4c000, ...)
	May 28 22:06:25 old-k8s-version-499466 kubelet[6483]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	May 28 22:06:25 old-k8s-version-499466 kubelet[6483]: net/http.(*Transport).dialConnFor(0xc000afe000, 0xc000bed760)
	May 28 22:06:25 old-k8s-version-499466 kubelet[6483]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	May 28 22:06:25 old-k8s-version-499466 kubelet[6483]: created by net/http.(*Transport).queueForDial
	May 28 22:06:25 old-k8s-version-499466 kubelet[6483]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	May 28 22:06:25 old-k8s-version-499466 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	May 28 22:06:25 old-k8s-version-499466 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	May 28 22:06:25 old-k8s-version-499466 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	May 28 22:06:26 old-k8s-version-499466 kubelet[6494]: I0528 22:06:26.011120    6494 server.go:416] Version: v1.20.0
	May 28 22:06:26 old-k8s-version-499466 kubelet[6494]: I0528 22:06:26.011380    6494 server.go:837] Client rotation is on, will bootstrap in background
	May 28 22:06:26 old-k8s-version-499466 kubelet[6494]: I0528 22:06:26.013277    6494 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	May 28 22:06:26 old-k8s-version-499466 kubelet[6494]: I0528 22:06:26.014254    6494 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	May 28 22:06:26 old-k8s-version-499466 kubelet[6494]: W0528 22:06:26.014377    6494 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-499466 -n old-k8s-version-499466
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-499466 -n old-k8s-version-499466: exit status 2 (229.626109ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-499466" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.62s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (432.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-595279 -n embed-certs-595279
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-05-28 22:09:54.611490075 +0000 UTC m=+6525.367568688
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-595279 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-595279 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.779µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-595279 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
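The failed assertion above reduces to two cluster-visible conditions: at least one pod carrying the label k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace, and a dashboard-metrics-scraper deployment whose container image contains the overridden registry.k8s.io/echoserver:1.4. Below is a minimal, hypothetical client-go sketch (not part of the minikube test suite) for reproducing the same two checks against this cluster; the kubeconfig path and context name are taken from the log in this report, and the file name and timeout are illustrative assumptions.

// dashboardcheck.go - hypothetical sketch reproducing the test's two checks.
package main

import (
	"context"
	"fmt"
	"strings"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path and context name as they appear in the log above.
	rules := &clientcmd.ClientConfigLoadingRules{ExplicitPath: "/home/jenkins/minikube-integration/18966-3963/kubeconfig"}
	overrides := &clientcmd.ConfigOverrides{CurrentContext: "embed-certs-595279"}
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides).ClientConfig()
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()

	// Check 1: pods matching the dashboard label in the dashboard namespace.
	pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
		LabelSelector: "k8s-app=kubernetes-dashboard",
	})
	if err != nil {
		panic(err)
	}
	fmt.Printf("dashboard pods found: %d\n", len(pods.Items))

	// Check 2: the scraper deployment should reference the overridden image.
	deploy, err := client.AppsV1().Deployments("kubernetes-dashboard").Get(ctx, "dashboard-metrics-scraper", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range deploy.Spec.Template.Spec.Containers {
		if strings.Contains(c.Image, "registry.k8s.io/echoserver:1.4") {
			fmt.Println("deployment uses expected image:", c.Image)
		}
	}
}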
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-595279 -n embed-certs-595279
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-595279 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-595279 logs -n 25: (1.288334549s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p enable-default-cni-110727                           | enable-default-cni-110727    | jenkins | v1.33.1 | 28 May 24 21:39 UTC | 28 May 24 21:39 UTC |
	| start   | -p embed-certs-595279                                  | embed-certs-595279           | jenkins | v1.33.1 | 28 May 24 21:39 UTC | 28 May 24 21:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-290122             | no-preload-290122            | jenkins | v1.33.1 | 28 May 24 21:41 UTC | 28 May 24 21:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-290122                                   | no-preload-290122            | jenkins | v1.33.1 | 28 May 24 21:41 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-595279            | embed-certs-595279           | jenkins | v1.33.1 | 28 May 24 21:41 UTC | 28 May 24 21:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-595279                                  | embed-certs-595279           | jenkins | v1.33.1 | 28 May 24 21:41 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-499466        | old-k8s-version-499466       | jenkins | v1.33.1 | 28 May 24 21:43 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-290122                  | no-preload-290122            | jenkins | v1.33.1 | 28 May 24 21:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-595279                 | embed-certs-595279           | jenkins | v1.33.1 | 28 May 24 21:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-290122                                   | no-preload-290122            | jenkins | v1.33.1 | 28 May 24 21:44 UTC | 28 May 24 21:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-595279                                  | embed-certs-595279           | jenkins | v1.33.1 | 28 May 24 21:44 UTC | 28 May 24 21:53 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-499466                              | old-k8s-version-499466       | jenkins | v1.33.1 | 28 May 24 21:45 UTC | 28 May 24 21:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-499466             | old-k8s-version-499466       | jenkins | v1.33.1 | 28 May 24 21:45 UTC | 28 May 24 21:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-499466                              | old-k8s-version-499466       | jenkins | v1.33.1 | 28 May 24 21:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-257793                              | cert-expiration-257793       | jenkins | v1.33.1 | 28 May 24 21:46 UTC | 28 May 24 21:46 UTC |
	| delete  | -p                                                     | disable-driver-mounts-807140 | jenkins | v1.33.1 | 28 May 24 21:46 UTC | 28 May 24 21:46 UTC |
	|         | disable-driver-mounts-807140                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-249165 | jenkins | v1.33.1 | 28 May 24 21:46 UTC | 28 May 24 21:50 UTC |
	|         | default-k8s-diff-port-249165                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-249165  | default-k8s-diff-port-249165 | jenkins | v1.33.1 | 28 May 24 21:51 UTC | 28 May 24 21:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-249165 | jenkins | v1.33.1 | 28 May 24 21:51 UTC |                     |
	|         | default-k8s-diff-port-249165                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-249165       | default-k8s-diff-port-249165 | jenkins | v1.33.1 | 28 May 24 21:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-249165 | jenkins | v1.33.1 | 28 May 24 21:53 UTC | 28 May 24 22:04 UTC |
	|         | default-k8s-diff-port-249165                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-499466                              | old-k8s-version-499466       | jenkins | v1.33.1 | 28 May 24 22:08 UTC | 28 May 24 22:08 UTC |
	| start   | -p newest-cni-588598 --memory=2200 --alsologtostderr   | newest-cni-588598            | jenkins | v1.33.1 | 28 May 24 22:08 UTC | 28 May 24 22:09 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-290122                                   | no-preload-290122            | jenkins | v1.33.1 | 28 May 24 22:09 UTC | 28 May 24 22:09 UTC |
	| addons  | enable metrics-server -p newest-cni-588598             | newest-cni-588598            | jenkins | v1.33.1 | 28 May 24 22:09 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/28 22:08:57
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0528 22:08:57.188803   77191 out.go:291] Setting OutFile to fd 1 ...
	I0528 22:08:57.188905   77191 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 22:08:57.188916   77191 out.go:304] Setting ErrFile to fd 2...
	I0528 22:08:57.188923   77191 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 22:08:57.189104   77191 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
	I0528 22:08:57.189793   77191 out.go:298] Setting JSON to false
	I0528 22:08:57.190777   77191 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6680,"bootTime":1716927457,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0528 22:08:57.190883   77191 start.go:139] virtualization: kvm guest
	I0528 22:08:57.193647   77191 out.go:177] * [newest-cni-588598] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0528 22:08:57.195156   77191 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 22:08:57.196510   77191 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 22:08:57.195140   77191 notify.go:220] Checking for updates...
	I0528 22:08:57.199229   77191 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 22:08:57.200463   77191 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 22:08:57.201796   77191 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0528 22:08:57.203101   77191 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 22:08:57.204704   77191 config.go:182] Loaded profile config "default-k8s-diff-port-249165": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 22:08:57.204843   77191 config.go:182] Loaded profile config "embed-certs-595279": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 22:08:57.204943   77191 config.go:182] Loaded profile config "no-preload-290122": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 22:08:57.205042   77191 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 22:08:57.241350   77191 out.go:177] * Using the kvm2 driver based on user configuration
	I0528 22:08:57.242598   77191 start.go:297] selected driver: kvm2
	I0528 22:08:57.242613   77191 start.go:901] validating driver "kvm2" against <nil>
	I0528 22:08:57.242626   77191 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0528 22:08:57.243350   77191 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 22:08:57.243417   77191 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18966-3963/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0528 22:08:57.258809   77191 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0528 22:08:57.258855   77191 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0528 22:08:57.258900   77191 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0528 22:08:57.259156   77191 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0528 22:08:57.259187   77191 cni.go:84] Creating CNI manager for ""
	I0528 22:08:57.259198   77191 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 22:08:57.259210   77191 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0528 22:08:57.259283   77191 start.go:340] cluster config:
	{Name:newest-cni-588598 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-588598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 22:08:57.259408   77191 iso.go:125] acquiring lock: {Name:mk0ef48bd19f4019c3ee87391d15b9afc1d5e8d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 22:08:57.261286   77191 out.go:177] * Starting "newest-cni-588598" primary control-plane node in "newest-cni-588598" cluster
	I0528 22:08:57.262498   77191 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 22:08:57.262535   77191 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0528 22:08:57.262543   77191 cache.go:56] Caching tarball of preloaded images
	I0528 22:08:57.262637   77191 preload.go:173] Found /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0528 22:08:57.262651   77191 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0528 22:08:57.262744   77191 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598/config.json ...
	I0528 22:08:57.262761   77191 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598/config.json: {Name:mkc98c6d7bee8a312d7c73c8010de24ccf0ba8b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 22:08:57.262903   77191 start.go:360] acquireMachinesLock for newest-cni-588598: {Name:mk6fb3103b370c7f3d9923fd9de7afe2c77239d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0528 22:08:57.262937   77191 start.go:364] duration metric: took 18.195µs to acquireMachinesLock for "newest-cni-588598"
	I0528 22:08:57.262959   77191 start.go:93] Provisioning new machine with config: &{Name:newest-cni-588598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-588598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0528 22:08:57.263040   77191 start.go:125] createHost starting for "" (driver="kvm2")
	I0528 22:08:57.265300   77191 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0528 22:08:57.265444   77191 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 22:08:57.265490   77191 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 22:08:57.279981   77191 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33121
	I0528 22:08:57.280352   77191 main.go:141] libmachine: () Calling .GetVersion
	I0528 22:08:57.280911   77191 main.go:141] libmachine: Using API Version  1
	I0528 22:08:57.280925   77191 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 22:08:57.281314   77191 main.go:141] libmachine: () Calling .GetMachineName
	I0528 22:08:57.281506   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetMachineName
	I0528 22:08:57.281703   77191 main.go:141] libmachine: (newest-cni-588598) Calling .DriverName
	I0528 22:08:57.281909   77191 start.go:159] libmachine.API.Create for "newest-cni-588598" (driver="kvm2")
	I0528 22:08:57.281957   77191 client.go:168] LocalClient.Create starting
	I0528 22:08:57.281993   77191 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem
	I0528 22:08:57.282037   77191 main.go:141] libmachine: Decoding PEM data...
	I0528 22:08:57.282060   77191 main.go:141] libmachine: Parsing certificate...
	I0528 22:08:57.282141   77191 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem
	I0528 22:08:57.282167   77191 main.go:141] libmachine: Decoding PEM data...
	I0528 22:08:57.282182   77191 main.go:141] libmachine: Parsing certificate...
	I0528 22:08:57.282207   77191 main.go:141] libmachine: Running pre-create checks...
	I0528 22:08:57.282219   77191 main.go:141] libmachine: (newest-cni-588598) Calling .PreCreateCheck
	I0528 22:08:57.282571   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetConfigRaw
	I0528 22:08:57.282954   77191 main.go:141] libmachine: Creating machine...
	I0528 22:08:57.282973   77191 main.go:141] libmachine: (newest-cni-588598) Calling .Create
	I0528 22:08:57.283144   77191 main.go:141] libmachine: (newest-cni-588598) Creating KVM machine...
	I0528 22:08:57.284183   77191 main.go:141] libmachine: (newest-cni-588598) DBG | found existing default KVM network
	I0528 22:08:57.285737   77191 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:08:57.285593   77214 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012bfa0}
	I0528 22:08:57.285784   77191 main.go:141] libmachine: (newest-cni-588598) DBG | created network xml: 
	I0528 22:08:57.285798   77191 main.go:141] libmachine: (newest-cni-588598) DBG | <network>
	I0528 22:08:57.285807   77191 main.go:141] libmachine: (newest-cni-588598) DBG |   <name>mk-newest-cni-588598</name>
	I0528 22:08:57.285820   77191 main.go:141] libmachine: (newest-cni-588598) DBG |   <dns enable='no'/>
	I0528 22:08:57.285831   77191 main.go:141] libmachine: (newest-cni-588598) DBG |   
	I0528 22:08:57.285842   77191 main.go:141] libmachine: (newest-cni-588598) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0528 22:08:57.285857   77191 main.go:141] libmachine: (newest-cni-588598) DBG |     <dhcp>
	I0528 22:08:57.285869   77191 main.go:141] libmachine: (newest-cni-588598) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0528 22:08:57.285880   77191 main.go:141] libmachine: (newest-cni-588598) DBG |     </dhcp>
	I0528 22:08:57.285897   77191 main.go:141] libmachine: (newest-cni-588598) DBG |   </ip>
	I0528 22:08:57.285909   77191 main.go:141] libmachine: (newest-cni-588598) DBG |   
	I0528 22:08:57.285920   77191 main.go:141] libmachine: (newest-cni-588598) DBG | </network>
	I0528 22:08:57.285933   77191 main.go:141] libmachine: (newest-cni-588598) DBG | 
	I0528 22:08:57.291124   77191 main.go:141] libmachine: (newest-cni-588598) DBG | trying to create private KVM network mk-newest-cni-588598 192.168.39.0/24...
	I0528 22:08:57.364504   77191 main.go:141] libmachine: (newest-cni-588598) DBG | private KVM network mk-newest-cni-588598 192.168.39.0/24 created
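	For context on the step logged above: the DBG lines print the libvirt network XML the driver generated and then report that the private network was created. A minimal sketch of the same define-and-start sequence, assuming the libvirt.org/go/libvirt bindings and a local qemu:///system connection (the binding name and file name are assumptions for illustration, not the driver's actual code), looks like this:
	
	// netcreate.go - hypothetical sketch: define and start a private libvirt
	// network equivalent to the mk-newest-cni-588598 XML printed above.
	package main
	
	import (
		libvirt "libvirt.org/go/libvirt"
	)
	
	func main() {
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			panic(err)
		}
		defer conn.Close()
	
		// XML mirrors the network definition logged above (DHCP range .2-.253).
		xml := `<network>
	  <name>mk-newest-cni-588598</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>`
	
		// Define the persistent network, then start it so domains can attach.
		network, err := conn.NetworkDefineXML(xml)
		if err != nil {
			panic(err)
		}
		defer network.Free()
		if err := network.Create(); err != nil {
			panic(err)
		}
	}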
	I0528 22:08:57.364535   77191 main.go:141] libmachine: (newest-cni-588598) Setting up store path in /home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598 ...
	I0528 22:08:57.364559   77191 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:08:57.364485   77214 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 22:08:57.364580   77191 main.go:141] libmachine: (newest-cni-588598) Building disk image from file:///home/jenkins/minikube-integration/18966-3963/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso
	I0528 22:08:57.364650   77191 main.go:141] libmachine: (newest-cni-588598) Downloading /home/jenkins/minikube-integration/18966-3963/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18966-3963/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0528 22:08:57.609337   77191 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:08:57.609211   77214 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598/id_rsa...
	I0528 22:08:57.755066   77191 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:08:57.754931   77214 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598/newest-cni-588598.rawdisk...
	I0528 22:08:57.755096   77191 main.go:141] libmachine: (newest-cni-588598) DBG | Writing magic tar header
	I0528 22:08:57.755114   77191 main.go:141] libmachine: (newest-cni-588598) DBG | Writing SSH key tar header
	I0528 22:08:57.755127   77191 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:08:57.755078   77214 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598 ...
	I0528 22:08:57.755242   77191 main.go:141] libmachine: (newest-cni-588598) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598
	I0528 22:08:57.755275   77191 main.go:141] libmachine: (newest-cni-588598) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18966-3963/.minikube/machines
	I0528 22:08:57.755293   77191 main.go:141] libmachine: (newest-cni-588598) Setting executable bit set on /home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598 (perms=drwx------)
	I0528 22:08:57.755307   77191 main.go:141] libmachine: (newest-cni-588598) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 22:08:57.755319   77191 main.go:141] libmachine: (newest-cni-588598) Setting executable bit set on /home/jenkins/minikube-integration/18966-3963/.minikube/machines (perms=drwxr-xr-x)
	I0528 22:08:57.755329   77191 main.go:141] libmachine: (newest-cni-588598) Setting executable bit set on /home/jenkins/minikube-integration/18966-3963/.minikube (perms=drwxr-xr-x)
	I0528 22:08:57.755338   77191 main.go:141] libmachine: (newest-cni-588598) Setting executable bit set on /home/jenkins/minikube-integration/18966-3963 (perms=drwxrwxr-x)
	I0528 22:08:57.755351   77191 main.go:141] libmachine: (newest-cni-588598) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0528 22:08:57.755363   77191 main.go:141] libmachine: (newest-cni-588598) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0528 22:08:57.755377   77191 main.go:141] libmachine: (newest-cni-588598) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18966-3963
	I0528 22:08:57.755391   77191 main.go:141] libmachine: (newest-cni-588598) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0528 22:08:57.755413   77191 main.go:141] libmachine: (newest-cni-588598) Creating domain...
	I0528 22:08:57.755421   77191 main.go:141] libmachine: (newest-cni-588598) DBG | Checking permissions on dir: /home/jenkins
	I0528 22:08:57.755429   77191 main.go:141] libmachine: (newest-cni-588598) DBG | Checking permissions on dir: /home
	I0528 22:08:57.755449   77191 main.go:141] libmachine: (newest-cni-588598) DBG | Skipping /home - not owner
	I0528 22:08:57.756940   77191 main.go:141] libmachine: (newest-cni-588598) define libvirt domain using xml: 
	I0528 22:08:57.756963   77191 main.go:141] libmachine: (newest-cni-588598) <domain type='kvm'>
	I0528 22:08:57.756973   77191 main.go:141] libmachine: (newest-cni-588598)   <name>newest-cni-588598</name>
	I0528 22:08:57.756981   77191 main.go:141] libmachine: (newest-cni-588598)   <memory unit='MiB'>2200</memory>
	I0528 22:08:57.756989   77191 main.go:141] libmachine: (newest-cni-588598)   <vcpu>2</vcpu>
	I0528 22:08:57.757000   77191 main.go:141] libmachine: (newest-cni-588598)   <features>
	I0528 22:08:57.757007   77191 main.go:141] libmachine: (newest-cni-588598)     <acpi/>
	I0528 22:08:57.757021   77191 main.go:141] libmachine: (newest-cni-588598)     <apic/>
	I0528 22:08:57.757049   77191 main.go:141] libmachine: (newest-cni-588598)     <pae/>
	I0528 22:08:57.757088   77191 main.go:141] libmachine: (newest-cni-588598)     
	I0528 22:08:57.757102   77191 main.go:141] libmachine: (newest-cni-588598)   </features>
	I0528 22:08:57.757111   77191 main.go:141] libmachine: (newest-cni-588598)   <cpu mode='host-passthrough'>
	I0528 22:08:57.757123   77191 main.go:141] libmachine: (newest-cni-588598)   
	I0528 22:08:57.757134   77191 main.go:141] libmachine: (newest-cni-588598)   </cpu>
	I0528 22:08:57.757145   77191 main.go:141] libmachine: (newest-cni-588598)   <os>
	I0528 22:08:57.757155   77191 main.go:141] libmachine: (newest-cni-588598)     <type>hvm</type>
	I0528 22:08:57.757174   77191 main.go:141] libmachine: (newest-cni-588598)     <boot dev='cdrom'/>
	I0528 22:08:57.757190   77191 main.go:141] libmachine: (newest-cni-588598)     <boot dev='hd'/>
	I0528 22:08:57.757199   77191 main.go:141] libmachine: (newest-cni-588598)     <bootmenu enable='no'/>
	I0528 22:08:57.757206   77191 main.go:141] libmachine: (newest-cni-588598)   </os>
	I0528 22:08:57.757213   77191 main.go:141] libmachine: (newest-cni-588598)   <devices>
	I0528 22:08:57.757221   77191 main.go:141] libmachine: (newest-cni-588598)     <disk type='file' device='cdrom'>
	I0528 22:08:57.757238   77191 main.go:141] libmachine: (newest-cni-588598)       <source file='/home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598/boot2docker.iso'/>
	I0528 22:08:57.757251   77191 main.go:141] libmachine: (newest-cni-588598)       <target dev='hdc' bus='scsi'/>
	I0528 22:08:57.757275   77191 main.go:141] libmachine: (newest-cni-588598)       <readonly/>
	I0528 22:08:57.757299   77191 main.go:141] libmachine: (newest-cni-588598)     </disk>
	I0528 22:08:57.757311   77191 main.go:141] libmachine: (newest-cni-588598)     <disk type='file' device='disk'>
	I0528 22:08:57.757323   77191 main.go:141] libmachine: (newest-cni-588598)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0528 22:08:57.757340   77191 main.go:141] libmachine: (newest-cni-588598)       <source file='/home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598/newest-cni-588598.rawdisk'/>
	I0528 22:08:57.757366   77191 main.go:141] libmachine: (newest-cni-588598)       <target dev='hda' bus='virtio'/>
	I0528 22:08:57.757380   77191 main.go:141] libmachine: (newest-cni-588598)     </disk>
	I0528 22:08:57.757388   77191 main.go:141] libmachine: (newest-cni-588598)     <interface type='network'>
	I0528 22:08:57.757397   77191 main.go:141] libmachine: (newest-cni-588598)       <source network='mk-newest-cni-588598'/>
	I0528 22:08:57.757406   77191 main.go:141] libmachine: (newest-cni-588598)       <model type='virtio'/>
	I0528 22:08:57.757414   77191 main.go:141] libmachine: (newest-cni-588598)     </interface>
	I0528 22:08:57.757428   77191 main.go:141] libmachine: (newest-cni-588598)     <interface type='network'>
	I0528 22:08:57.757439   77191 main.go:141] libmachine: (newest-cni-588598)       <source network='default'/>
	I0528 22:08:57.757447   77191 main.go:141] libmachine: (newest-cni-588598)       <model type='virtio'/>
	I0528 22:08:57.757458   77191 main.go:141] libmachine: (newest-cni-588598)     </interface>
	I0528 22:08:57.757469   77191 main.go:141] libmachine: (newest-cni-588598)     <serial type='pty'>
	I0528 22:08:57.757480   77191 main.go:141] libmachine: (newest-cni-588598)       <target port='0'/>
	I0528 22:08:57.757490   77191 main.go:141] libmachine: (newest-cni-588598)     </serial>
	I0528 22:08:57.757499   77191 main.go:141] libmachine: (newest-cni-588598)     <console type='pty'>
	I0528 22:08:57.757510   77191 main.go:141] libmachine: (newest-cni-588598)       <target type='serial' port='0'/>
	I0528 22:08:57.757519   77191 main.go:141] libmachine: (newest-cni-588598)     </console>
	I0528 22:08:57.757527   77191 main.go:141] libmachine: (newest-cni-588598)     <rng model='virtio'>
	I0528 22:08:57.757536   77191 main.go:141] libmachine: (newest-cni-588598)       <backend model='random'>/dev/random</backend>
	I0528 22:08:57.757543   77191 main.go:141] libmachine: (newest-cni-588598)     </rng>
	I0528 22:08:57.757551   77191 main.go:141] libmachine: (newest-cni-588598)     
	I0528 22:08:57.757557   77191 main.go:141] libmachine: (newest-cni-588598)     
	I0528 22:08:57.757564   77191 main.go:141] libmachine: (newest-cni-588598)   </devices>
	I0528 22:08:57.757570   77191 main.go:141] libmachine: (newest-cni-588598) </domain>
	I0528 22:08:57.757579   77191 main.go:141] libmachine: (newest-cni-588598) 
	I0528 22:08:57.762195   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:fb:4b:f7 in network default
	I0528 22:08:57.762790   77191 main.go:141] libmachine: (newest-cni-588598) Ensuring networks are active...
	I0528 22:08:57.762813   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:08:57.763540   77191 main.go:141] libmachine: (newest-cni-588598) Ensuring network default is active
	I0528 22:08:57.763975   77191 main.go:141] libmachine: (newest-cni-588598) Ensuring network mk-newest-cni-588598 is active
	I0528 22:08:57.764617   77191 main.go:141] libmachine: (newest-cni-588598) Getting domain xml...
	I0528 22:08:57.765473   77191 main.go:141] libmachine: (newest-cni-588598) Creating domain...
	I0528 22:08:59.027894   77191 main.go:141] libmachine: (newest-cni-588598) Waiting to get IP...
	I0528 22:08:59.028655   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:08:59.029051   77191 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:08:59.029098   77191 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:08:59.029032   77214 retry.go:31] will retry after 285.280112ms: waiting for machine to come up
	I0528 22:08:59.315531   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:08:59.315990   77191 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:08:59.316023   77191 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:08:59.315931   77214 retry.go:31] will retry after 350.098141ms: waiting for machine to come up
	I0528 22:08:59.667279   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:08:59.667732   77191 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:08:59.667764   77191 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:08:59.667689   77214 retry.go:31] will retry after 456.545841ms: waiting for machine to come up
	I0528 22:09:00.126444   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:00.126951   77191 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:09:00.126974   77191 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:09:00.126893   77214 retry.go:31] will retry after 385.534431ms: waiting for machine to come up
	I0528 22:09:00.514526   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:00.514990   77191 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:09:00.515017   77191 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:09:00.514958   77214 retry.go:31] will retry after 593.263865ms: waiting for machine to come up
	I0528 22:09:01.110012   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:01.110500   77191 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:09:01.110534   77191 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:09:01.110435   77214 retry.go:31] will retry after 594.648578ms: waiting for machine to come up
	I0528 22:09:01.706760   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:01.707215   77191 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:09:01.707277   77191 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:09:01.707197   77214 retry.go:31] will retry after 877.470046ms: waiting for machine to come up
	I0528 22:09:02.586444   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:02.586845   77191 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:09:02.586932   77191 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:09:02.586839   77214 retry.go:31] will retry after 1.23527304s: waiting for machine to come up
	I0528 22:09:03.823483   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:03.824008   77191 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:09:03.824037   77191 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:09:03.823960   77214 retry.go:31] will retry after 1.43309336s: waiting for machine to come up
	I0528 22:09:05.258858   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:05.259343   77191 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:09:05.259366   77191 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:09:05.259276   77214 retry.go:31] will retry after 2.220590768s: waiting for machine to come up
	I0528 22:09:07.481727   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:07.482296   77191 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:09:07.482328   77191 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:09:07.482212   77214 retry.go:31] will retry after 2.56599614s: waiting for machine to come up
	I0528 22:09:10.051062   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:10.051694   77191 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:09:10.051729   77191 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:09:10.051617   77214 retry.go:31] will retry after 3.175068668s: waiting for machine to come up
	I0528 22:09:13.228105   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:13.228532   77191 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:09:13.228559   77191 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:09:13.228478   77214 retry.go:31] will retry after 2.777270754s: waiting for machine to come up
	I0528 22:09:16.009327   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:16.009741   77191 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:09:16.009784   77191 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:09:16.009695   77214 retry.go:31] will retry after 4.889591222s: waiting for machine to come up
	I0528 22:09:20.903488   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:20.903995   77191 main.go:141] libmachine: (newest-cni-588598) Found IP for machine: 192.168.39.57
	I0528 22:09:20.904023   77191 main.go:141] libmachine: (newest-cni-588598) Reserving static IP address...
	I0528 22:09:20.904037   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has current primary IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:20.904350   77191 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find host DHCP lease matching {name: "newest-cni-588598", mac: "52:54:00:a4:df:c4", ip: "192.168.39.57"} in network mk-newest-cni-588598
	I0528 22:09:20.982294   77191 main.go:141] libmachine: (newest-cni-588598) DBG | Getting to WaitForSSH function...
	I0528 22:09:20.982319   77191 main.go:141] libmachine: (newest-cni-588598) Reserved static IP address: 192.168.39.57
	I0528 22:09:20.982344   77191 main.go:141] libmachine: (newest-cni-588598) Waiting for SSH to be available...
	I0528 22:09:20.984946   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:20.985323   77191 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:09:11 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a4:df:c4}
	I0528 22:09:20.985349   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:20.985533   77191 main.go:141] libmachine: (newest-cni-588598) DBG | Using SSH client type: external
	I0528 22:09:20.985558   77191 main.go:141] libmachine: (newest-cni-588598) DBG | Using SSH private key: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598/id_rsa (-rw-------)
	I0528 22:09:20.985597   77191 main.go:141] libmachine: (newest-cni-588598) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.57 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0528 22:09:20.985613   77191 main.go:141] libmachine: (newest-cni-588598) DBG | About to run SSH command:
	I0528 22:09:20.985641   77191 main.go:141] libmachine: (newest-cni-588598) DBG | exit 0
	I0528 22:09:21.113932   77191 main.go:141] libmachine: (newest-cni-588598) DBG | SSH cmd err, output: <nil>: 
	I0528 22:09:21.114194   77191 main.go:141] libmachine: (newest-cni-588598) KVM machine creation complete!
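	For context on the "Found IP for machine" / "found host DHCP lease matching" lines above: the driver resolves the new domain's address by matching its MAC against the DHCP leases of the private network. A minimal sketch of that lookup, again assuming the libvirt.org/go/libvirt bindings (names, MAC, and expected IP are taken from the log; the file name is an assumption):
	
	// leaselookup.go - hypothetical sketch: resolve a freshly created domain's IP
	// by matching its MAC address against the private network's DHCP leases.
	package main
	
	import (
		"fmt"
	
		libvirt "libvirt.org/go/libvirt"
	)
	
	func main() {
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			panic(err)
		}
		defer conn.Close()
	
		network, err := conn.LookupNetworkByName("mk-newest-cni-588598")
		if err != nil {
			panic(err)
		}
		defer network.Free()
	
		leases, err := network.GetDHCPLeases()
		if err != nil {
			panic(err)
		}
		for _, lease := range leases {
			// 52:54:00:a4:df:c4 is the MAC the driver generated for this domain.
			if lease.Mac == "52:54:00:a4:df:c4" {
				fmt.Println("domain IP:", lease.IPaddr) // expected 192.168.39.57
			}
		}
	}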
	I0528 22:09:21.114515   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetConfigRaw
	I0528 22:09:21.115045   77191 main.go:141] libmachine: (newest-cni-588598) Calling .DriverName
	I0528 22:09:21.115210   77191 main.go:141] libmachine: (newest-cni-588598) Calling .DriverName
	I0528 22:09:21.115374   77191 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0528 22:09:21.115387   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetState
	I0528 22:09:21.116779   77191 main.go:141] libmachine: Detecting operating system of created instance...
	I0528 22:09:21.116792   77191 main.go:141] libmachine: Waiting for SSH to be available...
	I0528 22:09:21.116797   77191 main.go:141] libmachine: Getting to WaitForSSH function...
	I0528 22:09:21.116803   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:09:21.119454   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:21.119824   77191 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:09:11 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:09:21.119850   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:21.120014   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHPort
	I0528 22:09:21.120207   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:09:21.120371   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:09:21.120525   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHUsername
	I0528 22:09:21.120673   77191 main.go:141] libmachine: Using SSH client type: native
	I0528 22:09:21.120912   77191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0528 22:09:21.120928   77191 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0528 22:09:21.225310   77191 main.go:141] libmachine: SSH cmd err, output: <nil>: 
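	Note: the `exit 0` probes above are how libmachine decides SSH is ready before provisioning starts, first via an external ssh client and then via the native client. A minimal Go sketch of the same retry loop, shelling out to the system `ssh` with a subset of the options shown in the log (address, key path and user are copied from the log; the helper name and the 2-minute timeout are illustrative assumptions):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForSSH retries "ssh ... exit 0" until it succeeds, mirroring the
    // external-client probe logged above. Flags are a subset of those in the log.
    func waitForSSH(addr, keyPath string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("ssh",
                "-F", "/dev/null",
                "-o", "ConnectTimeout=10",
                "-o", "StrictHostKeyChecking=no",
                "-o", "UserKnownHostsFile=/dev/null",
                "-o", "IdentitiesOnly=yes",
                "-i", keyPath,
                "-p", "22",
                "docker@"+addr,
                "exit 0")
            if err := cmd.Run(); err == nil {
                return nil // SSH is up
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("ssh to %s not ready after %s", addr, timeout)
    }

    func main() {
        if err := waitForSSH("192.168.39.57",
            "/home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598/id_rsa",
            2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }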
	I0528 22:09:21.225350   77191 main.go:141] libmachine: Detecting the provisioner...
	I0528 22:09:21.225362   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:09:21.228359   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:21.228634   77191 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:09:11 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:09:21.228661   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:21.228823   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHPort
	I0528 22:09:21.229063   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:09:21.229220   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:09:21.229417   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHUsername
	I0528 22:09:21.229593   77191 main.go:141] libmachine: Using SSH client type: native
	I0528 22:09:21.229777   77191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0528 22:09:21.229791   77191 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0528 22:09:21.338855   77191 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0528 22:09:21.338924   77191 main.go:141] libmachine: found compatible host: buildroot
	I0528 22:09:21.338937   77191 main.go:141] libmachine: Provisioning with buildroot...
	I0528 22:09:21.338945   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetMachineName
	I0528 22:09:21.339214   77191 buildroot.go:166] provisioning hostname "newest-cni-588598"
	I0528 22:09:21.339241   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetMachineName
	I0528 22:09:21.339479   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:09:21.342126   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:21.342482   77191 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:09:11 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:09:21.342512   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:21.342620   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHPort
	I0528 22:09:21.342805   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:09:21.342959   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:09:21.343100   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHUsername
	I0528 22:09:21.343235   77191 main.go:141] libmachine: Using SSH client type: native
	I0528 22:09:21.343409   77191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0528 22:09:21.343426   77191 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-588598 && echo "newest-cni-588598" | sudo tee /etc/hostname
	I0528 22:09:21.464742   77191 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-588598
	
	I0528 22:09:21.464767   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:09:21.467630   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:21.467969   77191 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:09:11 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:09:21.468007   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:21.468095   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHPort
	I0528 22:09:21.468279   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:09:21.468429   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:09:21.468579   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHUsername
	I0528 22:09:21.468767   77191 main.go:141] libmachine: Using SSH client type: native
	I0528 22:09:21.468978   77191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0528 22:09:21.468996   77191 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-588598' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-588598/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-588598' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 22:09:21.587539   77191 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 22:09:21.587566   77191 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18966-3963/.minikube CaCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18966-3963/.minikube}
	I0528 22:09:21.587605   77191 buildroot.go:174] setting up certificates
	I0528 22:09:21.587613   77191 provision.go:84] configureAuth start
	I0528 22:09:21.587621   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetMachineName
	I0528 22:09:21.587911   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetIP
	I0528 22:09:21.590786   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:21.591127   77191 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:09:11 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:09:21.591167   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:21.591300   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:09:21.593582   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:21.593894   77191 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:09:11 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:09:21.593922   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:21.594077   77191 provision.go:143] copyHostCerts
	I0528 22:09:21.594160   77191 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem, removing ...
	I0528 22:09:21.594176   77191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem
	I0528 22:09:21.594262   77191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem (1675 bytes)
	I0528 22:09:21.594384   77191 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem, removing ...
	I0528 22:09:21.594396   77191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem
	I0528 22:09:21.594434   77191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem (1078 bytes)
	I0528 22:09:21.594522   77191 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem, removing ...
	I0528 22:09:21.594539   77191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem
	I0528 22:09:21.594578   77191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem (1123 bytes)
	I0528 22:09:21.594658   77191 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem org=jenkins.newest-cni-588598 san=[127.0.0.1 192.168.39.57 localhost minikube newest-cni-588598]
	I0528 22:09:21.670605   77191 provision.go:177] copyRemoteCerts
	I0528 22:09:21.670652   77191 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 22:09:21.670672   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:09:21.673616   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:21.673969   77191 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:09:11 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:09:21.673996   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:21.674190   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHPort
	I0528 22:09:21.674352   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:09:21.674527   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHUsername
	I0528 22:09:21.674637   77191 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598/id_rsa Username:docker}
	I0528 22:09:21.760722   77191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0528 22:09:21.787677   77191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0528 22:09:21.813476   77191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0528 22:09:21.839238   77191 provision.go:87] duration metric: took 251.615231ms to configureAuth
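	Note: configureAuth above generates a server certificate whose SANs are 127.0.0.1, 192.168.39.57, localhost, minikube and newest-cni-588598, then copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A rough, self-signed Go sketch of building a certificate with that SAN set using only the standard library (minikube actually signs it with its CA key and different lifetimes; this is purely illustrative):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-588598"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SAN set taken from the provision.go line above.
            DNSNames:    []string{"localhost", "minikube", "newest-cni-588598"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.57")},
        }
        // Self-signed here for brevity; the real server.pem is signed by ca-key.pem.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }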
	I0528 22:09:21.839263   77191 buildroot.go:189] setting minikube options for container-runtime
	I0528 22:09:21.839468   77191 config.go:182] Loaded profile config "newest-cni-588598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 22:09:21.839536   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:09:21.842066   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:21.842445   77191 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:09:11 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:09:21.842485   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:21.842621   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHPort
	I0528 22:09:21.842801   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:09:21.842983   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:09:21.843163   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHUsername
	I0528 22:09:21.843345   77191 main.go:141] libmachine: Using SSH client type: native
	I0528 22:09:21.843500   77191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0528 22:09:21.843517   77191 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0528 22:09:22.109265   77191 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0528 22:09:22.109294   77191 main.go:141] libmachine: Checking connection to Docker...
	I0528 22:09:22.109313   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetURL
	I0528 22:09:22.110745   77191 main.go:141] libmachine: (newest-cni-588598) DBG | Using libvirt version 6000000
	I0528 22:09:22.113206   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:22.113575   77191 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:09:11 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:09:22.113600   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:22.113845   77191 main.go:141] libmachine: Docker is up and running!
	I0528 22:09:22.113864   77191 main.go:141] libmachine: Reticulating splines...
	I0528 22:09:22.113871   77191 client.go:171] duration metric: took 24.83190304s to LocalClient.Create
	I0528 22:09:22.113897   77191 start.go:167] duration metric: took 24.831990112s to libmachine.API.Create "newest-cni-588598"
	I0528 22:09:22.113910   77191 start.go:293] postStartSetup for "newest-cni-588598" (driver="kvm2")
	I0528 22:09:22.113922   77191 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0528 22:09:22.113940   77191 main.go:141] libmachine: (newest-cni-588598) Calling .DriverName
	I0528 22:09:22.114157   77191 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0528 22:09:22.114179   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:09:22.116516   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:22.116875   77191 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:09:11 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:09:22.116912   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:22.117073   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHPort
	I0528 22:09:22.117255   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:09:22.117447   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHUsername
	I0528 22:09:22.117615   77191 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598/id_rsa Username:docker}
	I0528 22:09:22.200715   77191 ssh_runner.go:195] Run: cat /etc/os-release
	I0528 22:09:22.205294   77191 info.go:137] Remote host: Buildroot 2023.02.9
	I0528 22:09:22.205318   77191 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/addons for local assets ...
	I0528 22:09:22.205374   77191 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/files for local assets ...
	I0528 22:09:22.205463   77191 filesync.go:149] local asset: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem -> 117602.pem in /etc/ssl/certs
	I0528 22:09:22.205546   77191 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0528 22:09:22.215225   77191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem --> /etc/ssl/certs/117602.pem (1708 bytes)
	I0528 22:09:22.240466   77191 start.go:296] duration metric: took 126.546231ms for postStartSetup
	I0528 22:09:22.240522   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetConfigRaw
	I0528 22:09:22.241098   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetIP
	I0528 22:09:22.243958   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:22.244319   77191 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:09:11 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:09:22.244335   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:22.244627   77191 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598/config.json ...
	I0528 22:09:22.244785   77191 start.go:128] duration metric: took 24.981737676s to createHost
	I0528 22:09:22.244805   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:09:22.247128   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:22.247519   77191 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:09:11 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:09:22.247548   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:22.247668   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHPort
	I0528 22:09:22.247843   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:09:22.247997   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:09:22.248116   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHUsername
	I0528 22:09:22.248331   77191 main.go:141] libmachine: Using SSH client type: native
	I0528 22:09:22.248532   77191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0528 22:09:22.248547   77191 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0528 22:09:22.354799   77191 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716934162.331502714
	
	I0528 22:09:22.354829   77191 fix.go:216] guest clock: 1716934162.331502714
	I0528 22:09:22.354839   77191 fix.go:229] Guest: 2024-05-28 22:09:22.331502714 +0000 UTC Remote: 2024-05-28 22:09:22.24479663 +0000 UTC m=+25.089878355 (delta=86.706084ms)
	I0528 22:09:22.354894   77191 fix.go:200] guest clock delta is within tolerance: 86.706084ms
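	Note: the clock check above runs `date +%s.%N` on the guest, parses the result (1716934162.331502714) and accepts the machine if the delta to the local clock is small; here it was 86.7ms. A small Go sketch of that comparison (the tolerance constant is an assumption, the log only shows that 86.7ms passed):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // guestDelta parses the "seconds.nanoseconds" string printed by `date +%s.%N`
    // on the guest and returns how far it is from the local clock right now.
    func guestDelta(out string) (time.Duration, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return 0, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return 0, err
            }
        }
        d := time.Since(time.Unix(sec, nsec))
        if d < 0 {
            d = -d
        }
        return d, nil
    }

    func main() {
        // With a live guest the delta is tiny; the 2024 timestamp from the log
        // obviously gives a huge value when run later.
        d, _ := guestDelta("1716934162.331502714")
        const tolerance = 2 * time.Second // assumed threshold, not from the log
        fmt.Printf("delta=%s within tolerance: %v\n", d, d < tolerance)
    }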
	I0528 22:09:22.354921   77191 start.go:83] releasing machines lock for "newest-cni-588598", held for 25.091972651s
	I0528 22:09:22.354952   77191 main.go:141] libmachine: (newest-cni-588598) Calling .DriverName
	I0528 22:09:22.355257   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetIP
	I0528 22:09:22.358210   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:22.358600   77191 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:09:11 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:09:22.358629   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:22.358790   77191 main.go:141] libmachine: (newest-cni-588598) Calling .DriverName
	I0528 22:09:22.359286   77191 main.go:141] libmachine: (newest-cni-588598) Calling .DriverName
	I0528 22:09:22.359446   77191 main.go:141] libmachine: (newest-cni-588598) Calling .DriverName
	I0528 22:09:22.359540   77191 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0528 22:09:22.359574   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:09:22.359608   77191 ssh_runner.go:195] Run: cat /version.json
	I0528 22:09:22.359630   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:09:22.362337   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:22.362567   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:22.362677   77191 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:09:11 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:09:22.362707   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:22.362902   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHPort
	I0528 22:09:22.362980   77191 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:09:11 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:09:22.363019   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:22.363088   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:09:22.363286   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHPort
	I0528 22:09:22.363301   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHUsername
	I0528 22:09:22.363471   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:09:22.363480   77191 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598/id_rsa Username:docker}
	I0528 22:09:22.363621   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHUsername
	I0528 22:09:22.363766   77191 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598/id_rsa Username:docker}
	I0528 22:09:22.443232   77191 ssh_runner.go:195] Run: systemctl --version
	I0528 22:09:22.479460   77191 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0528 22:09:22.649426   77191 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0528 22:09:22.656514   77191 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0528 22:09:22.656570   77191 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 22:09:22.672651   77191 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0528 22:09:22.672676   77191 start.go:494] detecting cgroup driver to use...
	I0528 22:09:22.672747   77191 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0528 22:09:22.695626   77191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 22:09:22.710816   77191 docker.go:217] disabling cri-docker service (if available) ...
	I0528 22:09:22.710901   77191 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0528 22:09:22.724719   77191 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0528 22:09:22.740781   77191 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0528 22:09:22.862590   77191 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0528 22:09:23.030745   77191 docker.go:233] disabling docker service ...
	I0528 22:09:23.030821   77191 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0528 22:09:23.046614   77191 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0528 22:09:23.060615   77191 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0528 22:09:23.183429   77191 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0528 22:09:23.306112   77191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0528 22:09:23.321381   77191 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 22:09:23.341737   77191 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0528 22:09:23.341819   77191 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 22:09:23.353025   77191 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0528 22:09:23.353084   77191 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 22:09:23.365142   77191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 22:09:23.376442   77191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 22:09:23.389445   77191 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0528 22:09:23.402881   77191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 22:09:23.414972   77191 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 22:09:23.434796   77191 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
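	Note: the `sed` runs above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.9, set cgroup_manager to cgroupfs, re-add conmon_cgroup = "pod", and make sure default_sysctls allows net.ipv4.ip_unprivileged_port_start=0. A rough Go equivalent of the pause-image, cgroup-manager and conmon_cgroup edits (minikube itself shells out to sed as shown; this sketch only mirrors the effect):

    package main

    import (
        "log"
        "os"
        "regexp"
    )

    // patchCrioConf applies the same first three edits the logged sed commands
    // make to /etc/crio/crio.conf.d/02-crio.conf.
    func patchCrioConf(path string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        s := string(data)
        s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.9"`)
        s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(s, `cgroup_manager = "cgroupfs"`)
        // Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
        s = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(s, "")
        s = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
            ReplaceAllString(s, "$0\nconmon_cgroup = \"pod\"")
        return os.WriteFile(path, []byte(s), 0644)
    }

    func main() {
        if err := patchCrioConf("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
            log.Fatal(err)
        }
    }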
	I0528 22:09:23.447421   77191 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0528 22:09:23.457280   77191 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0528 22:09:23.457349   77191 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0528 22:09:23.470855   77191 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0528 22:09:23.481523   77191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 22:09:23.612297   77191 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0528 22:09:23.757883   77191 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0528 22:09:23.757962   77191 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0528 22:09:23.762952   77191 start.go:562] Will wait 60s for crictl version
	I0528 22:09:23.763006   77191 ssh_runner.go:195] Run: which crictl
	I0528 22:09:23.767408   77191 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0528 22:09:23.812782   77191 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0528 22:09:23.812860   77191 ssh_runner.go:195] Run: crio --version
	I0528 22:09:23.846562   77191 ssh_runner.go:195] Run: crio --version
	I0528 22:09:23.877871   77191 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0528 22:09:23.879118   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetIP
	I0528 22:09:23.882110   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:23.882431   77191 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:09:11 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:09:23.882455   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:23.882794   77191 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0528 22:09:23.887855   77191 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
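	Note: the bash one-liner above is minikube's "ensure hosts entry" pattern: strip any existing line for host.minikube.internal, append a fresh `IP<tab>name` mapping, and copy the result back over /etc/hosts (the same pattern appears again below for control-plane.minikube.internal). A standalone Go sketch of the same idea (must run as root; the function name is hypothetical):

    package main

    import (
        "log"
        "os"
        "strings"
    )

    // ensureHostsEntry removes any existing line that maps `name` and appends
    // "ip\tname", mirroring the grep -v / echo / cp one-liner in the log.
    func ensureHostsEntry(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // drop the stale mapping
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
            log.Fatal(err)
        }
    }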
	I0528 22:09:23.902912   77191 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0528 22:09:23.904225   77191 kubeadm.go:877] updating cluster {Name:newest-cni-588598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.1 ClusterName:newest-cni-588598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVers
ion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0528 22:09:23.904382   77191 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 22:09:23.904467   77191 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 22:09:23.937380   77191 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0528 22:09:23.937438   77191 ssh_runner.go:195] Run: which lz4
	I0528 22:09:23.941391   77191 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0528 22:09:23.945567   77191 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0528 22:09:23.945591   77191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0528 22:09:25.398547   77191 crio.go:462] duration metric: took 1.457207803s to copy over tarball
	I0528 22:09:25.398641   77191 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0528 22:09:27.659841   77191 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.261160135s)
	I0528 22:09:27.659880   77191 crio.go:469] duration metric: took 2.261308948s to extract the tarball
	I0528 22:09:27.659889   77191 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0528 22:09:27.698544   77191 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 22:09:27.742713   77191 crio.go:514] all images are preloaded for cri-o runtime.
	I0528 22:09:27.742742   77191 cache_images.go:84] Images are preloaded, skipping loading
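	Note: the preload step above runs `sudo crictl images --output json` twice: before extracting the tarball no kube-apiserver image is found, so the 394MB preload is copied over and unpacked with tar/lz4; afterwards all images are reported as preloaded. A hedged Go sketch of parsing that output to decide whether a preload is needed (the `images`/`repoTags` JSON field names are an assumption about crictl's output shape):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
        "strings"
    )

    type crictlImages struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    // hasImage reports whether crictl already knows an image tag such as
    // "registry.k8s.io/kube-apiserver:v1.30.1".
    func hasImage(tag string) (bool, error) {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            return false, err
        }
        var parsed crictlImages
        if err := json.Unmarshal(out, &parsed); err != nil {
            return false, err
        }
        for _, img := range parsed.Images {
            for _, t := range img.RepoTags {
                if strings.EqualFold(t, tag) {
                    return true, nil
                }
            }
        }
        return false, nil
    }

    func main() {
        ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.30.1")
        fmt.Println(ok, err)
    }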
	I0528 22:09:27.742753   77191 kubeadm.go:928] updating node { 192.168.39.57 8443 v1.30.1 crio true true} ...
	I0528 22:09:27.742864   77191 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-588598 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.57
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:newest-cni-588598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0528 22:09:27.742945   77191 ssh_runner.go:195] Run: crio config
	I0528 22:09:27.791640   77191 cni.go:84] Creating CNI manager for ""
	I0528 22:09:27.791661   77191 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 22:09:27.791674   77191 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0528 22:09:27.791696   77191 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.57 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-588598 NodeName:newest-cni-588598 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.57"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[
] NodeIP:192.168.39.57 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0528 22:09:27.791828   77191 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.57
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-588598"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.57
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.57"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
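	Note: the generated kubeadm config above is four YAML documents in one file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); it is written to /var/tmp/minikube/kubeadm.yaml.new and copied over kubeadm.yaml further down in the log. A quick Go sketch that sanity-checks the file by decoding each document and printing its kind (uses gopkg.in/yaml.v3; purely illustrative, not part of minikube):

    package main

    import (
        "errors"
        "fmt"
        "io"
        "log"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
                break // no more documents
            } else if err != nil {
                log.Fatal(err)
            }
            // Expect InitConfiguration, ClusterConfiguration,
            // KubeletConfiguration and KubeProxyConfiguration in order.
            fmt.Println(doc["apiVersion"], doc["kind"])
        }
    }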
	
	I0528 22:09:27.791885   77191 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0528 22:09:27.802980   77191 binaries.go:44] Found k8s binaries, skipping transfer
	I0528 22:09:27.803046   77191 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0528 22:09:27.813409   77191 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (353 bytes)
	I0528 22:09:27.830964   77191 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0528 22:09:27.848413   77191 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2282 bytes)
	I0528 22:09:27.867194   77191 ssh_runner.go:195] Run: grep 192.168.39.57	control-plane.minikube.internal$ /etc/hosts
	I0528 22:09:27.871636   77191 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.57	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 22:09:27.886463   77191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 22:09:28.028806   77191 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 22:09:28.047507   77191 certs.go:68] Setting up /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598 for IP: 192.168.39.57
	I0528 22:09:28.047531   77191 certs.go:194] generating shared ca certs ...
	I0528 22:09:28.047555   77191 certs.go:226] acquiring lock for ca certs: {Name:mkf081a2e878fd451cfbdf01dc28a5f7a6a3abc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 22:09:28.047729   77191 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key
	I0528 22:09:28.047796   77191 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key
	I0528 22:09:28.047810   77191 certs.go:256] generating profile certs ...
	I0528 22:09:28.047881   77191 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598/client.key
	I0528 22:09:28.047900   77191 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598/client.crt with IP's: []
	I0528 22:09:28.472108   77191 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598/client.crt ...
	I0528 22:09:28.472151   77191 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598/client.crt: {Name:mk2d1213692383268be5f0d0ae3bbbf6ba3eabd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 22:09:28.472316   77191 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598/client.key ...
	I0528 22:09:28.472329   77191 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598/client.key: {Name:mkfbdfebd82ffbfde0c4cce256069ee74eb8ff3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 22:09:28.472445   77191 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598/apiserver.key.3d9132ba
	I0528 22:09:28.472467   77191 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598/apiserver.crt.3d9132ba with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.57]
	I0528 22:09:28.584779   77191 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598/apiserver.crt.3d9132ba ...
	I0528 22:09:28.584807   77191 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598/apiserver.crt.3d9132ba: {Name:mkfb461d63db108c5e0991bed7131c5ad7fec8d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 22:09:28.584961   77191 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598/apiserver.key.3d9132ba ...
	I0528 22:09:28.584974   77191 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598/apiserver.key.3d9132ba: {Name:mk1c60c76724ea1d6b3f4756410770de82af2db2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 22:09:28.585038   77191 certs.go:381] copying /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598/apiserver.crt.3d9132ba -> /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598/apiserver.crt
	I0528 22:09:28.585121   77191 certs.go:385] copying /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598/apiserver.key.3d9132ba -> /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598/apiserver.key
	I0528 22:09:28.585180   77191 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598/proxy-client.key
	I0528 22:09:28.585197   77191 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598/proxy-client.crt with IP's: []
	I0528 22:09:28.883706   77191 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598/proxy-client.crt ...
	I0528 22:09:28.883732   77191 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598/proxy-client.crt: {Name:mk669fb5e9177f96fcb983dab8b4585324abcff0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 22:09:28.883884   77191 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598/proxy-client.key ...
	I0528 22:09:28.883896   77191 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598/proxy-client.key: {Name:mk97d1d2c83a842868f74ab58c8b2edc55669b5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 22:09:28.884064   77191 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem (1338 bytes)
	W0528 22:09:28.884100   77191 certs.go:480] ignoring /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760_empty.pem, impossibly tiny 0 bytes
	I0528 22:09:28.884109   77191 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem (1675 bytes)
	I0528 22:09:28.884128   77191 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem (1078 bytes)
	I0528 22:09:28.884149   77191 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem (1123 bytes)
	I0528 22:09:28.884169   77191 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem (1675 bytes)
	I0528 22:09:28.884203   77191 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem (1708 bytes)
	I0528 22:09:28.884794   77191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0528 22:09:28.925617   77191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0528 22:09:28.955177   77191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0528 22:09:28.985077   77191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0528 22:09:29.010684   77191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0528 22:09:29.037469   77191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0528 22:09:29.063390   77191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0528 22:09:29.089775   77191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0528 22:09:29.116046   77191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0528 22:09:29.142962   77191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem --> /usr/share/ca-certificates/11760.pem (1338 bytes)
	I0528 22:09:29.168422   77191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem --> /usr/share/ca-certificates/117602.pem (1708 bytes)
	I0528 22:09:29.192702   77191 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0528 22:09:29.210702   77191 ssh_runner.go:195] Run: openssl version
	I0528 22:09:29.217279   77191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0528 22:09:29.230323   77191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0528 22:09:29.235536   77191 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 28 20:22 /usr/share/ca-certificates/minikubeCA.pem
	I0528 22:09:29.235590   77191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0528 22:09:29.241885   77191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0528 22:09:29.254284   77191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11760.pem && ln -fs /usr/share/ca-certificates/11760.pem /etc/ssl/certs/11760.pem"
	I0528 22:09:29.266858   77191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11760.pem
	I0528 22:09:29.272231   77191 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 28 20:34 /usr/share/ca-certificates/11760.pem
	I0528 22:09:29.272278   77191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11760.pem
	I0528 22:09:29.278804   77191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11760.pem /etc/ssl/certs/51391683.0"
	I0528 22:09:29.290571   77191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/117602.pem && ln -fs /usr/share/ca-certificates/117602.pem /etc/ssl/certs/117602.pem"
	I0528 22:09:29.303056   77191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117602.pem
	I0528 22:09:29.307809   77191 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 28 20:34 /usr/share/ca-certificates/117602.pem
	I0528 22:09:29.307869   77191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117602.pem
	I0528 22:09:29.314042   77191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/117602.pem /etc/ssl/certs/3ec20f2e.0"
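	Note: each ca-certificates step above follows the same pattern: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink /etc/ssl/certs/<hash>.0 back to it (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run). A small Go sketch that drives the same `openssl x509 -hash` call and creates the link (must run as root; error handling kept minimal):

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkBySubjectHash mirrors the openssl/ln steps in the log: hash the cert's
    // subject and point /etc/ssl/certs/<hash>.0 at it.
    func linkBySubjectHash(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        if err := os.Symlink(certPath, link); err != nil && !os.IsExist(err) {
            return err
        }
        fmt.Println("linked", link, "->", certPath)
        return nil
    }

    func main() {
        if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            log.Fatal(err)
        }
    }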
	I0528 22:09:29.326767   77191 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0528 22:09:29.331094   77191 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0528 22:09:29.331153   77191 kubeadm.go:391] StartCluster: {Name:newest-cni-588598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.1 ClusterName:newest-cni-588598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion
:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 22:09:29.331301   77191 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0528 22:09:29.331358   77191 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0528 22:09:29.386773   77191 cri.go:89] found id: ""
	I0528 22:09:29.386846   77191 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0528 22:09:29.397368   77191 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0528 22:09:29.407460   77191 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0528 22:09:29.417417   77191 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0528 22:09:29.417447   77191 kubeadm.go:156] found existing configuration files:
	
	I0528 22:09:29.417510   77191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0528 22:09:29.427745   77191 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0528 22:09:29.427796   77191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0528 22:09:29.438248   77191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0528 22:09:29.448991   77191 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0528 22:09:29.449052   77191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0528 22:09:29.459189   77191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0528 22:09:29.468186   77191 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0528 22:09:29.468240   77191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0528 22:09:29.478012   77191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0528 22:09:29.488287   77191 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0528 22:09:29.488384   77191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0528 22:09:29.497858   77191 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0528 22:09:29.783677   77191 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0528 22:09:39.543550   77191 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0528 22:09:39.543627   77191 kubeadm.go:309] [preflight] Running pre-flight checks
	I0528 22:09:39.543703   77191 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0528 22:09:39.543840   77191 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0528 22:09:39.543945   77191 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0528 22:09:39.544028   77191 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0528 22:09:39.545613   77191 out.go:204]   - Generating certificates and keys ...
	I0528 22:09:39.545704   77191 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0528 22:09:39.545788   77191 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0528 22:09:39.545884   77191 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0528 22:09:39.545984   77191 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0528 22:09:39.546088   77191 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0528 22:09:39.546153   77191 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0528 22:09:39.546199   77191 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0528 22:09:39.546303   77191 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-588598] and IPs [192.168.39.57 127.0.0.1 ::1]
	I0528 22:09:39.546353   77191 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0528 22:09:39.546468   77191 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-588598] and IPs [192.168.39.57 127.0.0.1 ::1]
	I0528 22:09:39.546532   77191 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0528 22:09:39.546584   77191 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0528 22:09:39.546620   77191 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0528 22:09:39.546666   77191 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0528 22:09:39.546712   77191 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0528 22:09:39.546761   77191 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0528 22:09:39.546830   77191 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0528 22:09:39.546940   77191 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0528 22:09:39.547017   77191 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0528 22:09:39.547090   77191 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0528 22:09:39.547142   77191 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0528 22:09:39.548714   77191 out.go:204]   - Booting up control plane ...
	I0528 22:09:39.548812   77191 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0528 22:09:39.548877   77191 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0528 22:09:39.548929   77191 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0528 22:09:39.549041   77191 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0528 22:09:39.549142   77191 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0528 22:09:39.549197   77191 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0528 22:09:39.549366   77191 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0528 22:09:39.549474   77191 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0528 22:09:39.549553   77191 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 503.259802ms
	I0528 22:09:39.549651   77191 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0528 22:09:39.549734   77191 kubeadm.go:309] [api-check] The API server is healthy after 5.001514292s
	I0528 22:09:39.549878   77191 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0528 22:09:39.550030   77191 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0528 22:09:39.550106   77191 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0528 22:09:39.550296   77191 kubeadm.go:309] [mark-control-plane] Marking the node newest-cni-588598 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0528 22:09:39.550370   77191 kubeadm.go:309] [bootstrap-token] Using token: q2uxcc.x3dqwribp43a4rmh
	I0528 22:09:39.551598   77191 out.go:204]   - Configuring RBAC rules ...
	I0528 22:09:39.551686   77191 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0528 22:09:39.551768   77191 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0528 22:09:39.551930   77191 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0528 22:09:39.552110   77191 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0528 22:09:39.552257   77191 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0528 22:09:39.552376   77191 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0528 22:09:39.552537   77191 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0528 22:09:39.552586   77191 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0528 22:09:39.552641   77191 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0528 22:09:39.552650   77191 kubeadm.go:309] 
	I0528 22:09:39.552720   77191 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0528 22:09:39.552730   77191 kubeadm.go:309] 
	I0528 22:09:39.552816   77191 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0528 22:09:39.552831   77191 kubeadm.go:309] 
	I0528 22:09:39.552872   77191 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0528 22:09:39.552949   77191 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0528 22:09:39.553016   77191 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0528 22:09:39.553026   77191 kubeadm.go:309] 
	I0528 22:09:39.553095   77191 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0528 22:09:39.553104   77191 kubeadm.go:309] 
	I0528 22:09:39.553174   77191 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0528 22:09:39.553183   77191 kubeadm.go:309] 
	I0528 22:09:39.553256   77191 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0528 22:09:39.553362   77191 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0528 22:09:39.553453   77191 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0528 22:09:39.553462   77191 kubeadm.go:309] 
	I0528 22:09:39.553600   77191 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0528 22:09:39.553704   77191 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0528 22:09:39.553714   77191 kubeadm.go:309] 
	I0528 22:09:39.553835   77191 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token q2uxcc.x3dqwribp43a4rmh \
	I0528 22:09:39.553963   77191 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1de7c92f7c6fda1d3adee02fe3e58e442e8ad50d9d224864ca8764aa1a3ae8bb \
	I0528 22:09:39.554002   77191 kubeadm.go:309] 	--control-plane 
	I0528 22:09:39.554011   77191 kubeadm.go:309] 
	I0528 22:09:39.554119   77191 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0528 22:09:39.554129   77191 kubeadm.go:309] 
	I0528 22:09:39.554232   77191 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token q2uxcc.x3dqwribp43a4rmh \
	I0528 22:09:39.554387   77191 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1de7c92f7c6fda1d3adee02fe3e58e442e8ad50d9d224864ca8764aa1a3ae8bb 
	I0528 22:09:39.554404   77191 cni.go:84] Creating CNI manager for ""
	I0528 22:09:39.554416   77191 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 22:09:39.555827   77191 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0528 22:09:39.557096   77191 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0528 22:09:39.569413   77191 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0528 22:09:39.590546   77191 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0528 22:09:39.590615   77191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:09:39.590627   77191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-588598 minikube.k8s.io/updated_at=2024_05_28T22_09_39_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82 minikube.k8s.io/name=newest-cni-588598 minikube.k8s.io/primary=true
	I0528 22:09:39.609624   77191 ops.go:34] apiserver oom_adj: -16
	I0528 22:09:39.797911   77191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:09:40.298445   77191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:09:40.797933   77191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:09:41.298900   77191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:09:41.798391   77191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:09:42.298372   77191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:09:42.798843   77191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:09:43.298749   77191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:09:43.798590   77191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:09:44.298978   77191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:09:44.798481   77191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:09:45.298827   77191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:09:45.798618   77191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:09:46.298991   77191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:09:46.798299   77191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:09:47.298602   77191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:09:47.798932   77191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:09:48.298331   77191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:09:48.798688   77191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:09:49.298974   77191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:09:49.798968   77191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:09:50.298974   77191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:09:50.798793   77191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:09:51.298017   77191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:09:51.798295   77191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:09:52.297912   77191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:09:52.797931   77191 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:09:52.985150   77191 kubeadm.go:1107] duration metric: took 13.394588513s to wait for elevateKubeSystemPrivileges
	W0528 22:09:52.985184   77191 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0528 22:09:52.985191   77191 kubeadm.go:393] duration metric: took 23.65404259s to StartCluster
	I0528 22:09:52.985208   77191 settings.go:142] acquiring lock: {Name:mkc1182212d780ab38539839372c25eb989b2e70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 22:09:52.985290   77191 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 22:09:52.987619   77191 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/kubeconfig: {Name:mkb9ab62df00efc21274382bbe7156f03baabac9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 22:09:52.987887   77191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0528 22:09:52.987906   77191 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0528 22:09:52.989554   77191 out.go:177] * Verifying Kubernetes components...
	I0528 22:09:52.987983   77191 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0528 22:09:52.988129   77191 config.go:182] Loaded profile config "newest-cni-588598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 22:09:52.990879   77191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 22:09:52.989612   77191 addons.go:69] Setting default-storageclass=true in profile "newest-cni-588598"
	I0528 22:09:52.990954   77191 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-588598"
	I0528 22:09:52.989614   77191 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-588598"
	I0528 22:09:52.991040   77191 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-588598"
	I0528 22:09:52.991076   77191 host.go:66] Checking if "newest-cni-588598" exists ...
	I0528 22:09:52.991395   77191 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 22:09:52.991426   77191 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 22:09:52.991431   77191 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 22:09:52.991464   77191 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 22:09:53.007200   77191 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40717
	I0528 22:09:53.007464   77191 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35483
	I0528 22:09:53.007770   77191 main.go:141] libmachine: () Calling .GetVersion
	I0528 22:09:53.007901   77191 main.go:141] libmachine: () Calling .GetVersion
	I0528 22:09:53.008284   77191 main.go:141] libmachine: Using API Version  1
	I0528 22:09:53.008305   77191 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 22:09:53.008513   77191 main.go:141] libmachine: Using API Version  1
	I0528 22:09:53.008530   77191 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 22:09:53.008636   77191 main.go:141] libmachine: () Calling .GetMachineName
	I0528 22:09:53.008836   77191 main.go:141] libmachine: () Calling .GetMachineName
	I0528 22:09:53.008840   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetState
	I0528 22:09:53.009273   77191 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 22:09:53.009311   77191 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 22:09:53.013312   77191 addons.go:234] Setting addon default-storageclass=true in "newest-cni-588598"
	I0528 22:09:53.013354   77191 host.go:66] Checking if "newest-cni-588598" exists ...
	I0528 22:09:53.013864   77191 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 22:09:53.013910   77191 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 22:09:53.024769   77191 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36387
	I0528 22:09:53.025231   77191 main.go:141] libmachine: () Calling .GetVersion
	I0528 22:09:53.025842   77191 main.go:141] libmachine: Using API Version  1
	I0528 22:09:53.025867   77191 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 22:09:53.026363   77191 main.go:141] libmachine: () Calling .GetMachineName
	I0528 22:09:53.026625   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetState
	I0528 22:09:53.028560   77191 main.go:141] libmachine: (newest-cni-588598) Calling .DriverName
	I0528 22:09:53.030379   77191 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0528 22:09:53.031586   77191 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43111
	I0528 22:09:53.034876   77191 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0528 22:09:53.034899   77191 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0528 22:09:53.034924   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:09:53.035285   77191 main.go:141] libmachine: () Calling .GetVersion
	I0528 22:09:53.035783   77191 main.go:141] libmachine: Using API Version  1
	I0528 22:09:53.035802   77191 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 22:09:53.036224   77191 main.go:141] libmachine: () Calling .GetMachineName
	I0528 22:09:53.036855   77191 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 22:09:53.036882   77191 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 22:09:53.038486   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:53.038885   77191 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:09:11 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:09:53.038919   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:53.039183   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHPort
	I0528 22:09:53.039612   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:09:53.039786   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHUsername
	I0528 22:09:53.039912   77191 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598/id_rsa Username:docker}
	I0528 22:09:53.055112   77191 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42853
	I0528 22:09:53.055480   77191 main.go:141] libmachine: () Calling .GetVersion
	I0528 22:09:53.055941   77191 main.go:141] libmachine: Using API Version  1
	I0528 22:09:53.055959   77191 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 22:09:53.056240   77191 main.go:141] libmachine: () Calling .GetMachineName
	I0528 22:09:53.056416   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetState
	I0528 22:09:53.058100   77191 main.go:141] libmachine: (newest-cni-588598) Calling .DriverName
	I0528 22:09:53.058318   77191 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0528 22:09:53.058332   77191 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0528 22:09:53.058357   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:09:53.061347   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:53.061726   77191 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:09:11 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:09:53.061748   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:53.061911   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHPort
	I0528 22:09:53.062083   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:09:53.062219   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHUsername
	I0528 22:09:53.062357   77191 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598/id_rsa Username:docker}
	I0528 22:09:53.354827   77191 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 22:09:53.355254   77191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0528 22:09:53.382558   77191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0528 22:09:53.422026   77191 api_server.go:52] waiting for apiserver process to appear ...
	I0528 22:09:53.422103   77191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 22:09:53.425266   77191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0528 22:09:53.937415   77191 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0528 22:09:54.399575   77191 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.016978399s)
	I0528 22:09:54.399625   77191 main.go:141] libmachine: Making call to close driver server
	I0528 22:09:54.399627   77191 api_server.go:72] duration metric: took 1.411681978s to wait for apiserver process to appear ...
	I0528 22:09:54.399645   77191 api_server.go:88] waiting for apiserver healthz status ...
	I0528 22:09:54.399667   77191 api_server.go:253] Checking apiserver healthz at https://192.168.39.57:8443/healthz ...
	I0528 22:09:54.399673   77191 main.go:141] libmachine: Making call to close driver server
	I0528 22:09:54.399694   77191 main.go:141] libmachine: (newest-cni-588598) Calling .Close
	I0528 22:09:54.399637   77191 main.go:141] libmachine: (newest-cni-588598) Calling .Close
	I0528 22:09:54.399941   77191 main.go:141] libmachine: Successfully made call to close driver server
	I0528 22:09:54.399952   77191 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 22:09:54.399960   77191 main.go:141] libmachine: Making call to close driver server
	I0528 22:09:54.399966   77191 main.go:141] libmachine: (newest-cni-588598) Calling .Close
	I0528 22:09:54.400321   77191 main.go:141] libmachine: (newest-cni-588598) DBG | Closing plugin on server side
	I0528 22:09:54.400324   77191 main.go:141] libmachine: (newest-cni-588598) DBG | Closing plugin on server side
	I0528 22:09:54.400328   77191 main.go:141] libmachine: Successfully made call to close driver server
	I0528 22:09:54.400348   77191 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 22:09:54.400363   77191 main.go:141] libmachine: Successfully made call to close driver server
	I0528 22:09:54.400372   77191 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 22:09:54.400380   77191 main.go:141] libmachine: Making call to close driver server
	I0528 22:09:54.400387   77191 main.go:141] libmachine: (newest-cni-588598) Calling .Close
	I0528 22:09:54.400920   77191 main.go:141] libmachine: (newest-cni-588598) DBG | Closing plugin on server side
	I0528 22:09:54.400980   77191 main.go:141] libmachine: Successfully made call to close driver server
	I0528 22:09:54.400998   77191 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 22:09:54.424468   77191 api_server.go:279] https://192.168.39.57:8443/healthz returned 200:
	ok
	I0528 22:09:54.427462   77191 api_server.go:141] control plane version: v1.30.1
	I0528 22:09:54.427485   77191 api_server.go:131] duration metric: took 27.832418ms to wait for apiserver health ...
	I0528 22:09:54.427501   77191 system_pods.go:43] waiting for kube-system pods to appear ...
	I0528 22:09:54.433834   77191 main.go:141] libmachine: Making call to close driver server
	I0528 22:09:54.433858   77191 main.go:141] libmachine: (newest-cni-588598) Calling .Close
	I0528 22:09:54.434221   77191 main.go:141] libmachine: (newest-cni-588598) DBG | Closing plugin on server side
	I0528 22:09:54.434279   77191 main.go:141] libmachine: Successfully made call to close driver server
	I0528 22:09:54.434297   77191 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 22:09:54.435714   77191 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0528 22:09:54.437095   77191 addons.go:510] duration metric: took 1.449109447s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0528 22:09:54.445575   77191 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-588598" context rescaled to 1 replicas
	I0528 22:09:54.445715   77191 system_pods.go:59] 8 kube-system pods found
	I0528 22:09:54.445742   77191 system_pods.go:61] "coredns-7db6d8ff4d-wk5f4" [9dcd7b17-fc19-4468-b8f9-76a2fb7f1ec9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0528 22:09:54.445751   77191 system_pods.go:61] "coredns-7db6d8ff4d-xlvtt" [9bf83005-a198-473a-8f61-36c9b3e5cad4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0528 22:09:54.445777   77191 system_pods.go:61] "etcd-newest-cni-588598" [785dbf00-a5a6-4946-8a36-6200a875dbcc] Running
	I0528 22:09:54.445785   77191 system_pods.go:61] "kube-apiserver-newest-cni-588598" [c9b79154-b6b7-494e-92b1-c447580db787] Running
	I0528 22:09:54.445790   77191 system_pods.go:61] "kube-controller-manager-newest-cni-588598" [f14bfaa9-0a88-4c01-9065-765797138f5d] Running
	I0528 22:09:54.445795   77191 system_pods.go:61] "kube-proxy-8jgfw" [8125c94f-11df-4eee-8612-9546dc054146] Running
	I0528 22:09:54.445801   77191 system_pods.go:61] "kube-scheduler-newest-cni-588598" [3e3160b5-e111-4a5e-9082-c9ae2a6633c7] Running
	I0528 22:09:54.445805   77191 system_pods.go:61] "storage-provisioner" [9993a26e-0e7d-45d6-ac6f-3672e3390ba5] Pending
	I0528 22:09:54.445813   77191 system_pods.go:74] duration metric: took 18.304658ms to wait for pod list to return data ...
	I0528 22:09:54.445823   77191 default_sa.go:34] waiting for default service account to be created ...
	I0528 22:09:54.448660   77191 default_sa.go:45] found service account: "default"
	I0528 22:09:54.448682   77191 default_sa.go:55] duration metric: took 2.85216ms for default service account to be created ...
	I0528 22:09:54.448694   77191 kubeadm.go:576] duration metric: took 1.460752684s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0528 22:09:54.448712   77191 node_conditions.go:102] verifying NodePressure condition ...
	I0528 22:09:54.453036   77191 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 22:09:54.453072   77191 node_conditions.go:123] node cpu capacity is 2
	I0528 22:09:54.453085   77191 node_conditions.go:105] duration metric: took 4.366333ms to run NodePressure ...
	I0528 22:09:54.453098   77191 start.go:240] waiting for startup goroutines ...
	I0528 22:09:54.453108   77191 start.go:245] waiting for cluster config update ...
	I0528 22:09:54.453126   77191 start.go:254] writing updated cluster config ...
	I0528 22:09:54.453468   77191 ssh_runner.go:195] Run: rm -f paused
	I0528 22:09:54.515230   77191 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0528 22:09:54.517295   77191 out.go:177] * Done! kubectl is now configured to use "newest-cni-588598" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	May 28 22:09:55 embed-certs-595279 crio[723]: time="2024-05-28 22:09:55.242112504Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716934195242074693,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0b395792-122b-44eb-9a34-5dbf972f7bfd name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:09:55 embed-certs-595279 crio[723]: time="2024-05-28 22:09:55.247220476Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=48796dea-13a5-4c68-94a2-2e0ec7046bc2 name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:09:55 embed-certs-595279 crio[723]: time="2024-05-28 22:09:55.247299170Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=48796dea-13a5-4c68-94a2-2e0ec7046bc2 name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:09:55 embed-certs-595279 crio[723]: time="2024-05-28 22:09:55.247721758Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c6334a28f9d29a95b433857aede8e5afb904c6d2e764f1afa093ca5e7c09de09,PodSandboxId:42fdb7574da76cec074950d11fbd6528dfa29a6188e70e5c07da8878169ce32d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716932983414073366,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bf52279-1fbc-40e5-8376-992c545c55dd,},Annotations:map[string]string{io.kubernetes.container.hash: 626393a8,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17521a4ecdae6117bdf145c9974f8c008f247f6115ecbd86caf00c69bc3a76ab,PodSandboxId:db5c23ed716b1d80aaaaff9e0c885d6269ef63f81b2cbc1c50f718374f8be9e4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1716932971881531924,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1b75037d-627f-4727-8935-8b459c226fe7,},Annotations:map[string]string{io.kubernetes.container.hash: 181f13b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da18d6d5334d9662f0b6045799eb8276b1589a4cdce01aac8f18b05145d94b8e,PodSandboxId:5757eebcac3fec427adff473a5345464791a98c28d6d27a92f35ac4e3e1eeaa7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716932968380358567,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8cb7b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3908d89-cfc6-4f1a-9aef-861aac0d3e29,},Annotations:map[string]string{io.kubernetes.container.hash: 1c6a8418,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c5ee70d85c3e595c91ab0c4bcfdbd1f1161b2643af86fce85c056fc5d38482d,PodSandboxId:42fdb7574da76cec074950d11fbd6528dfa29a6188e70e5c07da8878169ce32d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716932952735350469,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7bf52279-1fbc-40e5-8376-992c545c55dd,},Annotations:map[string]string{io.kubernetes.container.hash: 626393a8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfb41c075cb48a9d458fcc2a85aca29b37f56931246adac5b1230661c528edcc,PodSandboxId:01674a6515d0f2168d66e5d53a45a0b9da95b3f7349a36404ab03e925d034d82,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716932952694001749,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pnl5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c2c68bc-42c2-425e-ae35-a8c07b5d5
221,},Annotations:map[string]string{io.kubernetes.container.hash: 29e7a296,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:056fb79dac85895cd23ca69b8a238157e22babd5bf16ae416bb52d6e9b470622,PodSandboxId:1a57d3e3c4369791d819c0931e62e61d6cf80db256612b5ba0e89273ed65e27a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716932948909493688,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-595279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 379646dca49871cf019f010941906ede,},Annota
tions:map[string]string{io.kubernetes.container.hash: 2e2ed24,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51b2876b1f3db864aa0e6c3000bdfb694e988ccb6c3ffdc56053e0547989c5e5,PodSandboxId:56df0f463ab2ed29cc0ec6c5168b8f676efca7b0d53aeddde04c3c7791a677eb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716932948904428769,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-595279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c461cb87b5b1c21ce42827beca6c1ef1,},Annotations:map[str
ing]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3047accd150d979b115d9296d1b09bdad9f23c29d7c066800914fe3e6e001d3c,PodSandboxId:be288742c1e8abcc613b0f3fa06841cc10d07835180f68f5b65c15201e80f32a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716932948891115178,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-595279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8d43b1ed5aca63e62d2ff5a84cd7e44,},Annotations:map[string]string{io.kubernetes.container.hash: e
e247d8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5366e4c2bcdabc25f55a71903840f5a97745fb1bac231cf9c73daaf2b9dab89,PodSandboxId:2f4260e33a3bebe1e487a02c066cc93486631d5232dd87bf02ab3d9e896353e4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716932948902903465,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-595279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e976cff78f1a85f2cc285af7b550e6b,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=48796dea-13a5-4c68-94a2-2e0ec7046bc2 name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:09:55 embed-certs-595279 crio[723]: time="2024-05-28 22:09:55.286532071Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8ce18d07-de58-45c6-8236-94f522fbf3ab name=/runtime.v1.RuntimeService/Version
	May 28 22:09:55 embed-certs-595279 crio[723]: time="2024-05-28 22:09:55.286700990Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8ce18d07-de58-45c6-8236-94f522fbf3ab name=/runtime.v1.RuntimeService/Version
	May 28 22:09:55 embed-certs-595279 crio[723]: time="2024-05-28 22:09:55.288132546Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9e4894ab-bdd5-42a1-be7a-0a8559c6ed5d name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:09:55 embed-certs-595279 crio[723]: time="2024-05-28 22:09:55.288660502Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716934195288629197,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9e4894ab-bdd5-42a1-be7a-0a8559c6ed5d name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:09:55 embed-certs-595279 crio[723]: time="2024-05-28 22:09:55.289120772Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=be16395b-114b-4e06-b6ff-66fcf04e5b1e name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:09:55 embed-certs-595279 crio[723]: time="2024-05-28 22:09:55.289193362Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=be16395b-114b-4e06-b6ff-66fcf04e5b1e name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:09:55 embed-certs-595279 crio[723]: time="2024-05-28 22:09:55.289437621Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c6334a28f9d29a95b433857aede8e5afb904c6d2e764f1afa093ca5e7c09de09,PodSandboxId:42fdb7574da76cec074950d11fbd6528dfa29a6188e70e5c07da8878169ce32d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716932983414073366,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bf52279-1fbc-40e5-8376-992c545c55dd,},Annotations:map[string]string{io.kubernetes.container.hash: 626393a8,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17521a4ecdae6117bdf145c9974f8c008f247f6115ecbd86caf00c69bc3a76ab,PodSandboxId:db5c23ed716b1d80aaaaff9e0c885d6269ef63f81b2cbc1c50f718374f8be9e4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1716932971881531924,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1b75037d-627f-4727-8935-8b459c226fe7,},Annotations:map[string]string{io.kubernetes.container.hash: 181f13b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da18d6d5334d9662f0b6045799eb8276b1589a4cdce01aac8f18b05145d94b8e,PodSandboxId:5757eebcac3fec427adff473a5345464791a98c28d6d27a92f35ac4e3e1eeaa7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716932968380358567,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8cb7b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3908d89-cfc6-4f1a-9aef-861aac0d3e29,},Annotations:map[string]string{io.kubernetes.container.hash: 1c6a8418,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c5ee70d85c3e595c91ab0c4bcfdbd1f1161b2643af86fce85c056fc5d38482d,PodSandboxId:42fdb7574da76cec074950d11fbd6528dfa29a6188e70e5c07da8878169ce32d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716932952735350469,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7bf52279-1fbc-40e5-8376-992c545c55dd,},Annotations:map[string]string{io.kubernetes.container.hash: 626393a8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfb41c075cb48a9d458fcc2a85aca29b37f56931246adac5b1230661c528edcc,PodSandboxId:01674a6515d0f2168d66e5d53a45a0b9da95b3f7349a36404ab03e925d034d82,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716932952694001749,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pnl5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c2c68bc-42c2-425e-ae35-a8c07b5d5
221,},Annotations:map[string]string{io.kubernetes.container.hash: 29e7a296,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:056fb79dac85895cd23ca69b8a238157e22babd5bf16ae416bb52d6e9b470622,PodSandboxId:1a57d3e3c4369791d819c0931e62e61d6cf80db256612b5ba0e89273ed65e27a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716932948909493688,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-595279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 379646dca49871cf019f010941906ede,},Annota
tions:map[string]string{io.kubernetes.container.hash: 2e2ed24,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51b2876b1f3db864aa0e6c3000bdfb694e988ccb6c3ffdc56053e0547989c5e5,PodSandboxId:56df0f463ab2ed29cc0ec6c5168b8f676efca7b0d53aeddde04c3c7791a677eb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716932948904428769,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-595279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c461cb87b5b1c21ce42827beca6c1ef1,},Annotations:map[str
ing]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3047accd150d979b115d9296d1b09bdad9f23c29d7c066800914fe3e6e001d3c,PodSandboxId:be288742c1e8abcc613b0f3fa06841cc10d07835180f68f5b65c15201e80f32a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716932948891115178,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-595279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8d43b1ed5aca63e62d2ff5a84cd7e44,},Annotations:map[string]string{io.kubernetes.container.hash: e
e247d8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5366e4c2bcdabc25f55a71903840f5a97745fb1bac231cf9c73daaf2b9dab89,PodSandboxId:2f4260e33a3bebe1e487a02c066cc93486631d5232dd87bf02ab3d9e896353e4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716932948902903465,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-595279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e976cff78f1a85f2cc285af7b550e6b,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=be16395b-114b-4e06-b6ff-66fcf04e5b1e name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:09:55 embed-certs-595279 crio[723]: time="2024-05-28 22:09:55.342090695Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4452ba12-d2ff-4bb4-9308-326e776b1c46 name=/runtime.v1.RuntimeService/Version
	May 28 22:09:55 embed-certs-595279 crio[723]: time="2024-05-28 22:09:55.342179357Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4452ba12-d2ff-4bb4-9308-326e776b1c46 name=/runtime.v1.RuntimeService/Version
	May 28 22:09:55 embed-certs-595279 crio[723]: time="2024-05-28 22:09:55.343379671Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b59745e6-473f-4404-b85e-23d8d9cce55b name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:09:55 embed-certs-595279 crio[723]: time="2024-05-28 22:09:55.343957025Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716934195343930594,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b59745e6-473f-4404-b85e-23d8d9cce55b name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:09:55 embed-certs-595279 crio[723]: time="2024-05-28 22:09:55.344414063Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=086e4b86-0824-4b74-b70d-c935cf3b872c name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:09:55 embed-certs-595279 crio[723]: time="2024-05-28 22:09:55.344484212Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=086e4b86-0824-4b74-b70d-c935cf3b872c name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:09:55 embed-certs-595279 crio[723]: time="2024-05-28 22:09:55.344754600Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c6334a28f9d29a95b433857aede8e5afb904c6d2e764f1afa093ca5e7c09de09,PodSandboxId:42fdb7574da76cec074950d11fbd6528dfa29a6188e70e5c07da8878169ce32d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716932983414073366,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bf52279-1fbc-40e5-8376-992c545c55dd,},Annotations:map[string]string{io.kubernetes.container.hash: 626393a8,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17521a4ecdae6117bdf145c9974f8c008f247f6115ecbd86caf00c69bc3a76ab,PodSandboxId:db5c23ed716b1d80aaaaff9e0c885d6269ef63f81b2cbc1c50f718374f8be9e4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1716932971881531924,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1b75037d-627f-4727-8935-8b459c226fe7,},Annotations:map[string]string{io.kubernetes.container.hash: 181f13b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da18d6d5334d9662f0b6045799eb8276b1589a4cdce01aac8f18b05145d94b8e,PodSandboxId:5757eebcac3fec427adff473a5345464791a98c28d6d27a92f35ac4e3e1eeaa7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716932968380358567,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8cb7b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3908d89-cfc6-4f1a-9aef-861aac0d3e29,},Annotations:map[string]string{io.kubernetes.container.hash: 1c6a8418,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c5ee70d85c3e595c91ab0c4bcfdbd1f1161b2643af86fce85c056fc5d38482d,PodSandboxId:42fdb7574da76cec074950d11fbd6528dfa29a6188e70e5c07da8878169ce32d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716932952735350469,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7bf52279-1fbc-40e5-8376-992c545c55dd,},Annotations:map[string]string{io.kubernetes.container.hash: 626393a8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfb41c075cb48a9d458fcc2a85aca29b37f56931246adac5b1230661c528edcc,PodSandboxId:01674a6515d0f2168d66e5d53a45a0b9da95b3f7349a36404ab03e925d034d82,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716932952694001749,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pnl5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c2c68bc-42c2-425e-ae35-a8c07b5d5
221,},Annotations:map[string]string{io.kubernetes.container.hash: 29e7a296,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:056fb79dac85895cd23ca69b8a238157e22babd5bf16ae416bb52d6e9b470622,PodSandboxId:1a57d3e3c4369791d819c0931e62e61d6cf80db256612b5ba0e89273ed65e27a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716932948909493688,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-595279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 379646dca49871cf019f010941906ede,},Annota
tions:map[string]string{io.kubernetes.container.hash: 2e2ed24,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51b2876b1f3db864aa0e6c3000bdfb694e988ccb6c3ffdc56053e0547989c5e5,PodSandboxId:56df0f463ab2ed29cc0ec6c5168b8f676efca7b0d53aeddde04c3c7791a677eb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716932948904428769,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-595279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c461cb87b5b1c21ce42827beca6c1ef1,},Annotations:map[str
ing]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3047accd150d979b115d9296d1b09bdad9f23c29d7c066800914fe3e6e001d3c,PodSandboxId:be288742c1e8abcc613b0f3fa06841cc10d07835180f68f5b65c15201e80f32a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716932948891115178,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-595279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8d43b1ed5aca63e62d2ff5a84cd7e44,},Annotations:map[string]string{io.kubernetes.container.hash: e
e247d8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5366e4c2bcdabc25f55a71903840f5a97745fb1bac231cf9c73daaf2b9dab89,PodSandboxId:2f4260e33a3bebe1e487a02c066cc93486631d5232dd87bf02ab3d9e896353e4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716932948902903465,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-595279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e976cff78f1a85f2cc285af7b550e6b,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=086e4b86-0824-4b74-b70d-c935cf3b872c name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:09:55 embed-certs-595279 crio[723]: time="2024-05-28 22:09:55.385495006Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e15182f6-e8b4-4d32-a0a1-092adc0e9cbb name=/runtime.v1.RuntimeService/Version
	May 28 22:09:55 embed-certs-595279 crio[723]: time="2024-05-28 22:09:55.385688362Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e15182f6-e8b4-4d32-a0a1-092adc0e9cbb name=/runtime.v1.RuntimeService/Version
	May 28 22:09:55 embed-certs-595279 crio[723]: time="2024-05-28 22:09:55.387156834Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=76f049e9-5d0c-42b3-9131-f3d5a71c18c2 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:09:55 embed-certs-595279 crio[723]: time="2024-05-28 22:09:55.387828565Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716934195387797226,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=76f049e9-5d0c-42b3-9131-f3d5a71c18c2 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:09:55 embed-certs-595279 crio[723]: time="2024-05-28 22:09:55.388728453Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=35b721de-d506-484b-b5fd-4cbc558bd7b8 name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:09:55 embed-certs-595279 crio[723]: time="2024-05-28 22:09:55.388860793Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=35b721de-d506-484b-b5fd-4cbc558bd7b8 name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:09:55 embed-certs-595279 crio[723]: time="2024-05-28 22:09:55.389283373Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c6334a28f9d29a95b433857aede8e5afb904c6d2e764f1afa093ca5e7c09de09,PodSandboxId:42fdb7574da76cec074950d11fbd6528dfa29a6188e70e5c07da8878169ce32d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716932983414073366,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bf52279-1fbc-40e5-8376-992c545c55dd,},Annotations:map[string]string{io.kubernetes.container.hash: 626393a8,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17521a4ecdae6117bdf145c9974f8c008f247f6115ecbd86caf00c69bc3a76ab,PodSandboxId:db5c23ed716b1d80aaaaff9e0c885d6269ef63f81b2cbc1c50f718374f8be9e4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1716932971881531924,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1b75037d-627f-4727-8935-8b459c226fe7,},Annotations:map[string]string{io.kubernetes.container.hash: 181f13b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da18d6d5334d9662f0b6045799eb8276b1589a4cdce01aac8f18b05145d94b8e,PodSandboxId:5757eebcac3fec427adff473a5345464791a98c28d6d27a92f35ac4e3e1eeaa7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716932968380358567,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8cb7b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3908d89-cfc6-4f1a-9aef-861aac0d3e29,},Annotations:map[string]string{io.kubernetes.container.hash: 1c6a8418,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c5ee70d85c3e595c91ab0c4bcfdbd1f1161b2643af86fce85c056fc5d38482d,PodSandboxId:42fdb7574da76cec074950d11fbd6528dfa29a6188e70e5c07da8878169ce32d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716932952735350469,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7bf52279-1fbc-40e5-8376-992c545c55dd,},Annotations:map[string]string{io.kubernetes.container.hash: 626393a8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfb41c075cb48a9d458fcc2a85aca29b37f56931246adac5b1230661c528edcc,PodSandboxId:01674a6515d0f2168d66e5d53a45a0b9da95b3f7349a36404ab03e925d034d82,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716932952694001749,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pnl5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c2c68bc-42c2-425e-ae35-a8c07b5d5
221,},Annotations:map[string]string{io.kubernetes.container.hash: 29e7a296,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:056fb79dac85895cd23ca69b8a238157e22babd5bf16ae416bb52d6e9b470622,PodSandboxId:1a57d3e3c4369791d819c0931e62e61d6cf80db256612b5ba0e89273ed65e27a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716932948909493688,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-595279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 379646dca49871cf019f010941906ede,},Annota
tions:map[string]string{io.kubernetes.container.hash: 2e2ed24,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51b2876b1f3db864aa0e6c3000bdfb694e988ccb6c3ffdc56053e0547989c5e5,PodSandboxId:56df0f463ab2ed29cc0ec6c5168b8f676efca7b0d53aeddde04c3c7791a677eb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716932948904428769,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-595279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c461cb87b5b1c21ce42827beca6c1ef1,},Annotations:map[str
ing]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3047accd150d979b115d9296d1b09bdad9f23c29d7c066800914fe3e6e001d3c,PodSandboxId:be288742c1e8abcc613b0f3fa06841cc10d07835180f68f5b65c15201e80f32a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716932948891115178,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-595279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8d43b1ed5aca63e62d2ff5a84cd7e44,},Annotations:map[string]string{io.kubernetes.container.hash: e
e247d8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5366e4c2bcdabc25f55a71903840f5a97745fb1bac231cf9c73daaf2b9dab89,PodSandboxId:2f4260e33a3bebe1e487a02c066cc93486631d5232dd87bf02ab3d9e896353e4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716932948902903465,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-595279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e976cff78f1a85f2cc285af7b550e6b,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=35b721de-d506-484b-b5fd-4cbc558bd7b8 name=/runtime.v1.RuntimeService/ListContainers
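	
	The debug entries above are crio's log of incoming CRI calls (Version, ImageFsInfo, ListContainers). As a rough, hedged sketch of issuing the same calls by hand against this node's runtime (assuming crictl is on the node's PATH and already configured for the crio socket, as it is on the minikube guest image; "minikube" here stands for whichever minikube binary drove this run):
	  minikube -p embed-certs-595279 ssh "sudo crictl version"
	  minikube -p embed-certs-595279 ssh "sudo crictl imagefsinfo"
	  minikube -p embed-certs-595279 ssh "sudo crictl ps -a"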
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c6334a28f9d29       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       2                   42fdb7574da76       storage-provisioner
	17521a4ecdae6       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   20 minutes ago      Running             busybox                   1                   db5c23ed716b1       busybox
	da18d6d5334d9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      20 minutes ago      Running             coredns                   1                   5757eebcac3fe       coredns-7db6d8ff4d-8cb7b
	9c5ee70d85c3e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Exited              storage-provisioner       1                   42fdb7574da76       storage-provisioner
	cfb41c075cb48       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      20 minutes ago      Running             kube-proxy                1                   01674a6515d0f       kube-proxy-pnl5w
	056fb79dac858       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      20 minutes ago      Running             kube-apiserver            1                   1a57d3e3c4369       kube-apiserver-embed-certs-595279
	51b2876b1f3db       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      20 minutes ago      Running             kube-scheduler            1                   56df0f463ab2e       kube-scheduler-embed-certs-595279
	b5366e4c2bcda       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      20 minutes ago      Running             kube-controller-manager   1                   2f4260e33a3be       kube-controller-manager-embed-certs-595279
	3047accd150d9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      20 minutes ago      Running             etcd                      1                   be288742c1e8a       etcd-embed-certs-595279
	
	
	==> coredns [da18d6d5334d9662f0b6045799eb8276b1589a4cdce01aac8f18b05145d94b8e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57157 - 4869 "HINFO IN 2951049221763865448.2846863702263008063. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022951582s
	
	
	==> describe nodes <==
	Name:               embed-certs-595279
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-595279
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82
	                    minikube.k8s.io/name=embed-certs-595279
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_28T21_40_36_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 21:40:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-595279
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 22:09:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 May 2024 22:05:00 +0000   Tue, 28 May 2024 21:40:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 May 2024 22:05:00 +0000   Tue, 28 May 2024 21:40:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 May 2024 22:05:00 +0000   Tue, 28 May 2024 21:40:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 May 2024 22:05:00 +0000   Tue, 28 May 2024 21:49:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.79
	  Hostname:    embed-certs-595279
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bc8c0962a1714691aa4113fd41e50f5c
	  System UUID:                bc8c0962-a171-4691-aa41-13fd41e50f5c
	  Boot ID:                    98dbd1d5-d649-4a15-b07f-84f7ee63e3c4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7db6d8ff4d-8cb7b                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-embed-certs-595279                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-embed-certs-595279             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-embed-certs-595279    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-pnl5w                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-embed-certs-595279             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-569cc877fc-f6fz2               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 20m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node embed-certs-595279 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node embed-certs-595279 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node embed-certs-595279 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node embed-certs-595279 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node embed-certs-595279 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m                kubelet          Node embed-certs-595279 status is now: NodeHasSufficientPID
	  Normal  NodeReady                29m                kubelet          Node embed-certs-595279 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node embed-certs-595279 event: Registered Node embed-certs-595279 in Controller
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node embed-certs-595279 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node embed-certs-595279 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node embed-certs-595279 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20m                node-controller  Node embed-certs-595279 event: Registered Node embed-certs-595279 in Controller
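	
	The node description above is plain kubectl output; a minimal sketch for regenerating it against this profile (assuming the kubeconfig context is named after the profile, embed-certs-595279, which is minikube's default) would be:
	  kubectl --context embed-certs-595279 describe node embed-certs-595279
	  kubectl --context embed-certs-595279 get node embed-certs-595279 -o wide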
	
	
	==> dmesg <==
	[May28 21:48] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050794] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040380] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.483695] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.386823] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.573816] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.236578] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.061068] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063064] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[May28 21:49] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.149483] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[  +0.297313] systemd-fstab-generator[707]: Ignoring "noauto" option for root device
	[  +4.367360] systemd-fstab-generator[806]: Ignoring "noauto" option for root device
	[  +0.061369] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.031936] systemd-fstab-generator[926]: Ignoring "noauto" option for root device
	[  +4.674660] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.844754] systemd-fstab-generator[1542]: Ignoring "noauto" option for root device
	[  +4.786337] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.585587] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.070678] kauditd_printk_skb: 27 callbacks suppressed
	
	
	==> etcd [3047accd150d979b115d9296d1b09bdad9f23c29d7c066800914fe3e6e001d3c] <==
	{"level":"warn","ts":"2024-05-28T21:49:27.229246Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-28T21:49:26.387908Z","time spent":"841.330801ms","remote":"127.0.0.1:44154","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":1,"response size":4844,"request content":"key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-8cb7b\" "}
	{"level":"warn","ts":"2024-05-28T21:49:27.229546Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.303705ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-28T21:49:27.229652Z","caller":"traceutil/trace.go:171","msg":"trace[880333716] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:626; }","duration":"113.428611ms","start":"2024-05-28T21:49:27.116214Z","end":"2024-05-28T21:49:27.229643Z","steps":["trace[880333716] 'agreement among raft nodes before linearized reading'  (duration: 113.306564ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-28T21:50:13.67084Z","caller":"traceutil/trace.go:171","msg":"trace[760187028] transaction","detail":"{read_only:false; response_revision:690; number_of_response:1; }","duration":"113.910469ms","start":"2024-05-28T21:50:13.556889Z","end":"2024-05-28T21:50:13.6708Z","steps":["trace[760187028] 'process raft request'  (duration: 113.813482ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T21:58:52.660268Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"537.492048ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-28T21:58:52.660457Z","caller":"traceutil/trace.go:171","msg":"trace[1767165666] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1117; }","duration":"537.709156ms","start":"2024-05-28T21:58:52.122677Z","end":"2024-05-28T21:58:52.660386Z","steps":["trace[1767165666] 'range keys from in-memory index tree'  (duration: 537.439608ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T21:58:52.660504Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-28T21:58:52.122662Z","time spent":"537.82857ms","remote":"127.0.0.1:44154","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":0,"response size":29,"request content":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" "}
	{"level":"warn","ts":"2024-05-28T21:58:52.660294Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"362.467032ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-05-28T21:58:52.660839Z","caller":"traceutil/trace.go:171","msg":"trace[1703068809] range","detail":"{range_begin:/registry/priorityclasses/; range_end:/registry/priorityclasses0; response_count:0; response_revision:1117; }","duration":"363.051798ms","start":"2024-05-28T21:58:52.297775Z","end":"2024-05-28T21:58:52.660827Z","steps":["trace[1703068809] 'count revisions from in-memory index tree'  (duration: 362.381392ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T21:58:52.660892Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-28T21:58:52.297759Z","time spent":"363.122321ms","remote":"127.0.0.1:44340","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":2,"response size":31,"request content":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" count_only:true "}
	{"level":"warn","ts":"2024-05-28T21:58:52.660996Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"545.50174ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-28T21:58:52.661039Z","caller":"traceutil/trace.go:171","msg":"trace[470276632] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1117; }","duration":"545.558691ms","start":"2024-05-28T21:58:52.115472Z","end":"2024-05-28T21:58:52.661031Z","steps":["trace[470276632] 'range keys from in-memory index tree'  (duration: 545.376747ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T21:58:52.661059Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-28T21:58:52.115461Z","time spent":"545.592846ms","remote":"127.0.0.1:43940","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-05-28T21:58:52.660935Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"699.266403ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/\" range_end:\"/registry/replicasets0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-05-28T21:58:52.661206Z","caller":"traceutil/trace.go:171","msg":"trace[1808651470] range","detail":"{range_begin:/registry/replicasets/; range_end:/registry/replicasets0; response_count:0; response_revision:1117; }","duration":"699.556983ms","start":"2024-05-28T21:58:51.961641Z","end":"2024-05-28T21:58:52.661198Z","steps":["trace[1808651470] 'count revisions from in-memory index tree'  (duration: 699.12125ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T21:58:52.661288Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-28T21:58:51.961627Z","time spent":"699.628527ms","remote":"127.0.0.1:44476","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":2,"response size":31,"request content":"key:\"/registry/replicasets/\" range_end:\"/registry/replicasets0\" count_only:true "}
	{"level":"info","ts":"2024-05-28T21:59:10.848798Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":891}
	{"level":"info","ts":"2024-05-28T21:59:10.858556Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":891,"took":"9.506359ms","hash":1635858895,"current-db-size-bytes":2760704,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":2760704,"current-db-size-in-use":"2.8 MB"}
	{"level":"info","ts":"2024-05-28T21:59:10.858653Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1635858895,"revision":891,"compact-revision":-1}
	{"level":"info","ts":"2024-05-28T22:04:10.857716Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1133}
	{"level":"info","ts":"2024-05-28T22:04:10.862716Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1133,"took":"4.33334ms","hash":552762009,"current-db-size-bytes":2760704,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":1646592,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-05-28T22:04:10.862797Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":552762009,"revision":1133,"compact-revision":891}
	{"level":"info","ts":"2024-05-28T22:09:10.864393Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1376}
	{"level":"info","ts":"2024-05-28T22:09:10.86805Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1376,"took":"3.319031ms","hash":1769541365,"current-db-size-bytes":2760704,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":1638400,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-05-28T22:09:10.868095Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1769541365,"revision":1376,"compact-revision":1133}
	
	
	==> kernel <==
	 22:09:55 up 21 min,  0 users,  load average: 0.12, 0.17, 0.16
	Linux embed-certs-595279 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [056fb79dac85895cd23ca69b8a238157e22babd5bf16ae416bb52d6e9b470622] <==
	I0528 22:04:13.161816       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0528 22:05:13.161691       1 handler_proxy.go:93] no RequestInfo found in the context
	E0528 22:05:13.161815       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0528 22:05:13.161882       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0528 22:05:13.162072       1 handler_proxy.go:93] no RequestInfo found in the context
	E0528 22:05:13.162142       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0528 22:05:13.163883       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0528 22:07:13.162713       1 handler_proxy.go:93] no RequestInfo found in the context
	E0528 22:07:13.162797       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0528 22:07:13.162811       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0528 22:07:13.164977       1 handler_proxy.go:93] no RequestInfo found in the context
	E0528 22:07:13.165098       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0528 22:07:13.165126       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0528 22:09:12.166514       1 handler_proxy.go:93] no RequestInfo found in the context
	E0528 22:09:12.166896       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0528 22:09:13.168195       1 handler_proxy.go:93] no RequestInfo found in the context
	E0528 22:09:13.168301       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0528 22:09:13.168340       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0528 22:09:13.168247       1 handler_proxy.go:93] no RequestInfo found in the context
	E0528 22:09:13.168454       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0528 22:09:13.169751       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
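	
	The repeated 503s above indicate the apiserver has no healthy backend for the v1beta1.metrics.k8s.io APIService. A hedged sketch for inspecting that aggregation path (the k8s-app=metrics-server label is an assumption about how the addon labels its pods; the metrics-server pod itself appears in the node description above):
	  kubectl --context embed-certs-595279 get apiservice v1beta1.metrics.k8s.io
	  kubectl --context embed-certs-595279 -n kube-system get pods -l k8s-app=metrics-server -o wide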
	
	
	==> kube-controller-manager [b5366e4c2bcdabc25f55a71903840f5a97745fb1bac231cf9c73daaf2b9dab89] <==
	E0528 22:04:25.446154       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:04:26.052815       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 22:04:55.451301       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:04:56.060353       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0528 22:05:23.245482       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="275.671µs"
	E0528 22:05:25.456744       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:05:26.068346       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0528 22:05:35.244639       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="93.64µs"
	E0528 22:05:55.462342       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:05:56.078418       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 22:06:25.467886       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:06:26.087467       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 22:06:55.472827       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:06:56.095325       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 22:07:25.478385       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:07:26.102165       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 22:07:55.484624       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:07:56.112392       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 22:08:25.490355       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:08:26.128293       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 22:08:55.499541       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:08:56.137514       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 22:09:25.505554       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:09:26.149640       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 22:09:55.511918       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	
	
	==> kube-proxy [cfb41c075cb48a9d458fcc2a85aca29b37f56931246adac5b1230661c528edcc] <==
	I0528 21:49:12.898688       1 server_linux.go:69] "Using iptables proxy"
	I0528 21:49:12.914685       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.79"]
	I0528 21:49:12.946940       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0528 21:49:12.946990       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0528 21:49:12.947043       1 server_linux.go:165] "Using iptables Proxier"
	I0528 21:49:12.949651       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0528 21:49:12.949883       1 server.go:872] "Version info" version="v1.30.1"
	I0528 21:49:12.949912       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 21:49:12.952726       1 config.go:192] "Starting service config controller"
	I0528 21:49:12.952762       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0528 21:49:12.952787       1 config.go:101] "Starting endpoint slice config controller"
	I0528 21:49:12.952792       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0528 21:49:12.960289       1 config.go:319] "Starting node config controller"
	I0528 21:49:12.960316       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0528 21:49:13.053906       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0528 21:49:13.053979       1 shared_informer.go:320] Caches are synced for service config
	I0528 21:49:13.060425       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [51b2876b1f3db864aa0e6c3000bdfb694e988ccb6c3ffdc56053e0547989c5e5] <==
	I0528 21:49:09.782373       1 serving.go:380] Generated self-signed cert in-memory
	W0528 21:49:12.083203       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0528 21:49:12.083343       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0528 21:49:12.083417       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0528 21:49:12.083461       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0528 21:49:12.211369       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0528 21:49:12.217851       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 21:49:12.227548       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0528 21:49:12.227752       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0528 21:49:12.228311       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0528 21:49:12.230772       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0528 21:49:12.328766       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 28 22:07:08 embed-certs-595279 kubelet[933]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 22:07:08 embed-certs-595279 kubelet[933]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 28 22:07:15 embed-certs-595279 kubelet[933]: E0528 22:07:15.232457     933 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-f6fz2" podUID="b5e432cd-3b95-4f20-b9b3-c498512a7564"
	May 28 22:07:30 embed-certs-595279 kubelet[933]: E0528 22:07:30.233201     933 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-f6fz2" podUID="b5e432cd-3b95-4f20-b9b3-c498512a7564"
	May 28 22:07:42 embed-certs-595279 kubelet[933]: E0528 22:07:42.232225     933 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-f6fz2" podUID="b5e432cd-3b95-4f20-b9b3-c498512a7564"
	May 28 22:07:54 embed-certs-595279 kubelet[933]: E0528 22:07:54.232380     933 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-f6fz2" podUID="b5e432cd-3b95-4f20-b9b3-c498512a7564"
	May 28 22:08:08 embed-certs-595279 kubelet[933]: E0528 22:08:08.234029     933 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-f6fz2" podUID="b5e432cd-3b95-4f20-b9b3-c498512a7564"
	May 28 22:08:08 embed-certs-595279 kubelet[933]: E0528 22:08:08.261548     933 iptables.go:577] "Could not set up iptables canary" err=<
	May 28 22:08:08 embed-certs-595279 kubelet[933]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 22:08:08 embed-certs-595279 kubelet[933]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 22:08:08 embed-certs-595279 kubelet[933]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 22:08:08 embed-certs-595279 kubelet[933]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 28 22:08:23 embed-certs-595279 kubelet[933]: E0528 22:08:23.230925     933 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-f6fz2" podUID="b5e432cd-3b95-4f20-b9b3-c498512a7564"
	May 28 22:08:35 embed-certs-595279 kubelet[933]: E0528 22:08:35.230524     933 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-f6fz2" podUID="b5e432cd-3b95-4f20-b9b3-c498512a7564"
	May 28 22:08:48 embed-certs-595279 kubelet[933]: E0528 22:08:48.231424     933 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-f6fz2" podUID="b5e432cd-3b95-4f20-b9b3-c498512a7564"
	May 28 22:09:00 embed-certs-595279 kubelet[933]: E0528 22:09:00.232066     933 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-f6fz2" podUID="b5e432cd-3b95-4f20-b9b3-c498512a7564"
	May 28 22:09:08 embed-certs-595279 kubelet[933]: E0528 22:09:08.260974     933 iptables.go:577] "Could not set up iptables canary" err=<
	May 28 22:09:08 embed-certs-595279 kubelet[933]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 22:09:08 embed-certs-595279 kubelet[933]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 22:09:08 embed-certs-595279 kubelet[933]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 22:09:08 embed-certs-595279 kubelet[933]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 28 22:09:14 embed-certs-595279 kubelet[933]: E0528 22:09:14.231471     933 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-f6fz2" podUID="b5e432cd-3b95-4f20-b9b3-c498512a7564"
	May 28 22:09:25 embed-certs-595279 kubelet[933]: E0528 22:09:25.231727     933 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-f6fz2" podUID="b5e432cd-3b95-4f20-b9b3-c498512a7564"
	May 28 22:09:40 embed-certs-595279 kubelet[933]: E0528 22:09:40.231815     933 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-f6fz2" podUID="b5e432cd-3b95-4f20-b9b3-c498512a7564"
	May 28 22:09:52 embed-certs-595279 kubelet[933]: E0528 22:09:52.234321     933 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-f6fz2" podUID="b5e432cd-3b95-4f20-b9b3-c498512a7564"
	
	
	==> storage-provisioner [9c5ee70d85c3e595c91ab0c4bcfdbd1f1161b2643af86fce85c056fc5d38482d] <==
	I0528 21:49:12.873498       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0528 21:49:42.877892       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [c6334a28f9d29a95b433857aede8e5afb904c6d2e764f1afa093ca5e7c09de09] <==
	I0528 21:49:43.516737       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0528 21:49:43.533120       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0528 21:49:43.533405       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0528 21:50:00.935356       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0528 21:50:00.936273       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"efd8c73c-2b15-4c73-812f-ad9c2ba03fd4", APIVersion:"v1", ResourceVersion:"675", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-595279_e605c4d9-51a6-4a1f-8323-20929da1efa1 became leader
	I0528 21:50:00.936496       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-595279_e605c4d9-51a6-4a1f-8323-20929da1efa1!
	I0528 21:50:01.041906       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-595279_e605c4d9-51a6-4a1f-8323-20929da1efa1!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-595279 -n embed-certs-595279
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-595279 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-f6fz2
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-595279 describe pod metrics-server-569cc877fc-f6fz2
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-595279 describe pod metrics-server-569cc877fc-f6fz2: exit status 1 (62.018982ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-f6fz2" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-595279 describe pod metrics-server-569cc877fc-f6fz2: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (432.17s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (335.82s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-290122 -n no-preload-290122
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-05-28 22:09:31.553437483 +0000 UTC m=+6502.309516099
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-290122 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-290122 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-290122 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-290122 -n no-preload-290122
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-290122 logs -n 25
E0528 22:09:32.641027   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kindnet-110727/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-290122 logs -n 25: (1.349552091s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-110727                           | enable-default-cni-110727    | jenkins | v1.33.1 | 28 May 24 21:39 UTC | 28 May 24 21:39 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |         |                     |                     |
	|         | \;                                                     |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-110727                           | enable-default-cni-110727    | jenkins | v1.33.1 | 28 May 24 21:39 UTC | 28 May 24 21:39 UTC |
	|         | sudo crio config                                       |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-110727                           | enable-default-cni-110727    | jenkins | v1.33.1 | 28 May 24 21:39 UTC | 28 May 24 21:39 UTC |
	| start   | -p embed-certs-595279                                  | embed-certs-595279           | jenkins | v1.33.1 | 28 May 24 21:39 UTC | 28 May 24 21:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-290122             | no-preload-290122            | jenkins | v1.33.1 | 28 May 24 21:41 UTC | 28 May 24 21:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-290122                                   | no-preload-290122            | jenkins | v1.33.1 | 28 May 24 21:41 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-595279            | embed-certs-595279           | jenkins | v1.33.1 | 28 May 24 21:41 UTC | 28 May 24 21:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-595279                                  | embed-certs-595279           | jenkins | v1.33.1 | 28 May 24 21:41 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-499466        | old-k8s-version-499466       | jenkins | v1.33.1 | 28 May 24 21:43 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-290122                  | no-preload-290122            | jenkins | v1.33.1 | 28 May 24 21:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-595279                 | embed-certs-595279           | jenkins | v1.33.1 | 28 May 24 21:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-290122                                   | no-preload-290122            | jenkins | v1.33.1 | 28 May 24 21:44 UTC | 28 May 24 21:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-595279                                  | embed-certs-595279           | jenkins | v1.33.1 | 28 May 24 21:44 UTC | 28 May 24 21:53 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-499466                              | old-k8s-version-499466       | jenkins | v1.33.1 | 28 May 24 21:45 UTC | 28 May 24 21:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-499466             | old-k8s-version-499466       | jenkins | v1.33.1 | 28 May 24 21:45 UTC | 28 May 24 21:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-499466                              | old-k8s-version-499466       | jenkins | v1.33.1 | 28 May 24 21:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-257793                              | cert-expiration-257793       | jenkins | v1.33.1 | 28 May 24 21:46 UTC | 28 May 24 21:46 UTC |
	| delete  | -p                                                     | disable-driver-mounts-807140 | jenkins | v1.33.1 | 28 May 24 21:46 UTC | 28 May 24 21:46 UTC |
	|         | disable-driver-mounts-807140                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-249165 | jenkins | v1.33.1 | 28 May 24 21:46 UTC | 28 May 24 21:50 UTC |
	|         | default-k8s-diff-port-249165                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-249165  | default-k8s-diff-port-249165 | jenkins | v1.33.1 | 28 May 24 21:51 UTC | 28 May 24 21:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-249165 | jenkins | v1.33.1 | 28 May 24 21:51 UTC |                     |
	|         | default-k8s-diff-port-249165                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-249165       | default-k8s-diff-port-249165 | jenkins | v1.33.1 | 28 May 24 21:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-249165 | jenkins | v1.33.1 | 28 May 24 21:53 UTC | 28 May 24 22:04 UTC |
	|         | default-k8s-diff-port-249165                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-499466                              | old-k8s-version-499466       | jenkins | v1.33.1 | 28 May 24 22:08 UTC | 28 May 24 22:08 UTC |
	| start   | -p newest-cni-588598 --memory=2200 --alsologtostderr   | newest-cni-588598            | jenkins | v1.33.1 | 28 May 24 22:08 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/28 22:08:57
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0528 22:08:57.188803   77191 out.go:291] Setting OutFile to fd 1 ...
	I0528 22:08:57.188905   77191 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 22:08:57.188916   77191 out.go:304] Setting ErrFile to fd 2...
	I0528 22:08:57.188923   77191 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 22:08:57.189104   77191 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
	I0528 22:08:57.189793   77191 out.go:298] Setting JSON to false
	I0528 22:08:57.190777   77191 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6680,"bootTime":1716927457,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0528 22:08:57.190883   77191 start.go:139] virtualization: kvm guest
	I0528 22:08:57.193647   77191 out.go:177] * [newest-cni-588598] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0528 22:08:57.195156   77191 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 22:08:57.196510   77191 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 22:08:57.195140   77191 notify.go:220] Checking for updates...
	I0528 22:08:57.199229   77191 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 22:08:57.200463   77191 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 22:08:57.201796   77191 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0528 22:08:57.203101   77191 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 22:08:57.204704   77191 config.go:182] Loaded profile config "default-k8s-diff-port-249165": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 22:08:57.204843   77191 config.go:182] Loaded profile config "embed-certs-595279": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 22:08:57.204943   77191 config.go:182] Loaded profile config "no-preload-290122": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 22:08:57.205042   77191 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 22:08:57.241350   77191 out.go:177] * Using the kvm2 driver based on user configuration
	I0528 22:08:57.242598   77191 start.go:297] selected driver: kvm2
	I0528 22:08:57.242613   77191 start.go:901] validating driver "kvm2" against <nil>
	I0528 22:08:57.242626   77191 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0528 22:08:57.243350   77191 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 22:08:57.243417   77191 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18966-3963/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0528 22:08:57.258809   77191 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0528 22:08:57.258855   77191 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0528 22:08:57.258900   77191 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0528 22:08:57.259156   77191 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0528 22:08:57.259187   77191 cni.go:84] Creating CNI manager for ""
	I0528 22:08:57.259198   77191 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 22:08:57.259210   77191 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0528 22:08:57.259283   77191 start.go:340] cluster config:
	{Name:newest-cni-588598 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-588598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 22:08:57.259408   77191 iso.go:125] acquiring lock: {Name:mk0ef48bd19f4019c3ee87391d15b9afc1d5e8d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 22:08:57.261286   77191 out.go:177] * Starting "newest-cni-588598" primary control-plane node in "newest-cni-588598" cluster
	I0528 22:08:57.262498   77191 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 22:08:57.262535   77191 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0528 22:08:57.262543   77191 cache.go:56] Caching tarball of preloaded images
	I0528 22:08:57.262637   77191 preload.go:173] Found /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0528 22:08:57.262651   77191 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0528 22:08:57.262744   77191 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598/config.json ...
	I0528 22:08:57.262761   77191 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598/config.json: {Name:mkc98c6d7bee8a312d7c73c8010de24ccf0ba8b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 22:08:57.262903   77191 start.go:360] acquireMachinesLock for newest-cni-588598: {Name:mk6fb3103b370c7f3d9923fd9de7afe2c77239d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0528 22:08:57.262937   77191 start.go:364] duration metric: took 18.195µs to acquireMachinesLock for "newest-cni-588598"
	I0528 22:08:57.262959   77191 start.go:93] Provisioning new machine with config: &{Name:newest-cni-588598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-588598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0528 22:08:57.263040   77191 start.go:125] createHost starting for "" (driver="kvm2")
	I0528 22:08:57.265300   77191 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0528 22:08:57.265444   77191 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 22:08:57.265490   77191 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 22:08:57.279981   77191 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33121
	I0528 22:08:57.280352   77191 main.go:141] libmachine: () Calling .GetVersion
	I0528 22:08:57.280911   77191 main.go:141] libmachine: Using API Version  1
	I0528 22:08:57.280925   77191 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 22:08:57.281314   77191 main.go:141] libmachine: () Calling .GetMachineName
	I0528 22:08:57.281506   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetMachineName
	I0528 22:08:57.281703   77191 main.go:141] libmachine: (newest-cni-588598) Calling .DriverName
	I0528 22:08:57.281909   77191 start.go:159] libmachine.API.Create for "newest-cni-588598" (driver="kvm2")
	I0528 22:08:57.281957   77191 client.go:168] LocalClient.Create starting
	I0528 22:08:57.281993   77191 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem
	I0528 22:08:57.282037   77191 main.go:141] libmachine: Decoding PEM data...
	I0528 22:08:57.282060   77191 main.go:141] libmachine: Parsing certificate...
	I0528 22:08:57.282141   77191 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem
	I0528 22:08:57.282167   77191 main.go:141] libmachine: Decoding PEM data...
	I0528 22:08:57.282182   77191 main.go:141] libmachine: Parsing certificate...
	I0528 22:08:57.282207   77191 main.go:141] libmachine: Running pre-create checks...
	I0528 22:08:57.282219   77191 main.go:141] libmachine: (newest-cni-588598) Calling .PreCreateCheck
	I0528 22:08:57.282571   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetConfigRaw
	I0528 22:08:57.282954   77191 main.go:141] libmachine: Creating machine...
	I0528 22:08:57.282973   77191 main.go:141] libmachine: (newest-cni-588598) Calling .Create
	I0528 22:08:57.283144   77191 main.go:141] libmachine: (newest-cni-588598) Creating KVM machine...
	I0528 22:08:57.284183   77191 main.go:141] libmachine: (newest-cni-588598) DBG | found existing default KVM network
	I0528 22:08:57.285737   77191 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:08:57.285593   77214 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012bfa0}
	I0528 22:08:57.285784   77191 main.go:141] libmachine: (newest-cni-588598) DBG | created network xml: 
	I0528 22:08:57.285798   77191 main.go:141] libmachine: (newest-cni-588598) DBG | <network>
	I0528 22:08:57.285807   77191 main.go:141] libmachine: (newest-cni-588598) DBG |   <name>mk-newest-cni-588598</name>
	I0528 22:08:57.285820   77191 main.go:141] libmachine: (newest-cni-588598) DBG |   <dns enable='no'/>
	I0528 22:08:57.285831   77191 main.go:141] libmachine: (newest-cni-588598) DBG |   
	I0528 22:08:57.285842   77191 main.go:141] libmachine: (newest-cni-588598) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0528 22:08:57.285857   77191 main.go:141] libmachine: (newest-cni-588598) DBG |     <dhcp>
	I0528 22:08:57.285869   77191 main.go:141] libmachine: (newest-cni-588598) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0528 22:08:57.285880   77191 main.go:141] libmachine: (newest-cni-588598) DBG |     </dhcp>
	I0528 22:08:57.285897   77191 main.go:141] libmachine: (newest-cni-588598) DBG |   </ip>
	I0528 22:08:57.285909   77191 main.go:141] libmachine: (newest-cni-588598) DBG |   
	I0528 22:08:57.285920   77191 main.go:141] libmachine: (newest-cni-588598) DBG | </network>
	I0528 22:08:57.285933   77191 main.go:141] libmachine: (newest-cni-588598) DBG | 
	I0528 22:08:57.291124   77191 main.go:141] libmachine: (newest-cni-588598) DBG | trying to create private KVM network mk-newest-cni-588598 192.168.39.0/24...
	I0528 22:08:57.364504   77191 main.go:141] libmachine: (newest-cni-588598) DBG | private KVM network mk-newest-cni-588598 192.168.39.0/24 created
	I0528 22:08:57.364535   77191 main.go:141] libmachine: (newest-cni-588598) Setting up store path in /home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598 ...
	I0528 22:08:57.364559   77191 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:08:57.364485   77214 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 22:08:57.364580   77191 main.go:141] libmachine: (newest-cni-588598) Building disk image from file:///home/jenkins/minikube-integration/18966-3963/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso
	I0528 22:08:57.364650   77191 main.go:141] libmachine: (newest-cni-588598) Downloading /home/jenkins/minikube-integration/18966-3963/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18966-3963/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0528 22:08:57.609337   77191 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:08:57.609211   77214 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598/id_rsa...
	I0528 22:08:57.755066   77191 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:08:57.754931   77214 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598/newest-cni-588598.rawdisk...
	I0528 22:08:57.755096   77191 main.go:141] libmachine: (newest-cni-588598) DBG | Writing magic tar header
	I0528 22:08:57.755114   77191 main.go:141] libmachine: (newest-cni-588598) DBG | Writing SSH key tar header
	I0528 22:08:57.755127   77191 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:08:57.755078   77214 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598 ...
	I0528 22:08:57.755242   77191 main.go:141] libmachine: (newest-cni-588598) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598
	I0528 22:08:57.755275   77191 main.go:141] libmachine: (newest-cni-588598) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18966-3963/.minikube/machines
	I0528 22:08:57.755293   77191 main.go:141] libmachine: (newest-cni-588598) Setting executable bit set on /home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598 (perms=drwx------)
	I0528 22:08:57.755307   77191 main.go:141] libmachine: (newest-cni-588598) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 22:08:57.755319   77191 main.go:141] libmachine: (newest-cni-588598) Setting executable bit set on /home/jenkins/minikube-integration/18966-3963/.minikube/machines (perms=drwxr-xr-x)
	I0528 22:08:57.755329   77191 main.go:141] libmachine: (newest-cni-588598) Setting executable bit set on /home/jenkins/minikube-integration/18966-3963/.minikube (perms=drwxr-xr-x)
	I0528 22:08:57.755338   77191 main.go:141] libmachine: (newest-cni-588598) Setting executable bit set on /home/jenkins/minikube-integration/18966-3963 (perms=drwxrwxr-x)
	I0528 22:08:57.755351   77191 main.go:141] libmachine: (newest-cni-588598) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0528 22:08:57.755363   77191 main.go:141] libmachine: (newest-cni-588598) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0528 22:08:57.755377   77191 main.go:141] libmachine: (newest-cni-588598) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18966-3963
	I0528 22:08:57.755391   77191 main.go:141] libmachine: (newest-cni-588598) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0528 22:08:57.755413   77191 main.go:141] libmachine: (newest-cni-588598) Creating domain...
	I0528 22:08:57.755421   77191 main.go:141] libmachine: (newest-cni-588598) DBG | Checking permissions on dir: /home/jenkins
	I0528 22:08:57.755429   77191 main.go:141] libmachine: (newest-cni-588598) DBG | Checking permissions on dir: /home
	I0528 22:08:57.755449   77191 main.go:141] libmachine: (newest-cni-588598) DBG | Skipping /home - not owner
	I0528 22:08:57.756940   77191 main.go:141] libmachine: (newest-cni-588598) define libvirt domain using xml: 
	I0528 22:08:57.756963   77191 main.go:141] libmachine: (newest-cni-588598) <domain type='kvm'>
	I0528 22:08:57.756973   77191 main.go:141] libmachine: (newest-cni-588598)   <name>newest-cni-588598</name>
	I0528 22:08:57.756981   77191 main.go:141] libmachine: (newest-cni-588598)   <memory unit='MiB'>2200</memory>
	I0528 22:08:57.756989   77191 main.go:141] libmachine: (newest-cni-588598)   <vcpu>2</vcpu>
	I0528 22:08:57.757000   77191 main.go:141] libmachine: (newest-cni-588598)   <features>
	I0528 22:08:57.757007   77191 main.go:141] libmachine: (newest-cni-588598)     <acpi/>
	I0528 22:08:57.757021   77191 main.go:141] libmachine: (newest-cni-588598)     <apic/>
	I0528 22:08:57.757049   77191 main.go:141] libmachine: (newest-cni-588598)     <pae/>
	I0528 22:08:57.757088   77191 main.go:141] libmachine: (newest-cni-588598)     
	I0528 22:08:57.757102   77191 main.go:141] libmachine: (newest-cni-588598)   </features>
	I0528 22:08:57.757111   77191 main.go:141] libmachine: (newest-cni-588598)   <cpu mode='host-passthrough'>
	I0528 22:08:57.757123   77191 main.go:141] libmachine: (newest-cni-588598)   
	I0528 22:08:57.757134   77191 main.go:141] libmachine: (newest-cni-588598)   </cpu>
	I0528 22:08:57.757145   77191 main.go:141] libmachine: (newest-cni-588598)   <os>
	I0528 22:08:57.757155   77191 main.go:141] libmachine: (newest-cni-588598)     <type>hvm</type>
	I0528 22:08:57.757174   77191 main.go:141] libmachine: (newest-cni-588598)     <boot dev='cdrom'/>
	I0528 22:08:57.757190   77191 main.go:141] libmachine: (newest-cni-588598)     <boot dev='hd'/>
	I0528 22:08:57.757199   77191 main.go:141] libmachine: (newest-cni-588598)     <bootmenu enable='no'/>
	I0528 22:08:57.757206   77191 main.go:141] libmachine: (newest-cni-588598)   </os>
	I0528 22:08:57.757213   77191 main.go:141] libmachine: (newest-cni-588598)   <devices>
	I0528 22:08:57.757221   77191 main.go:141] libmachine: (newest-cni-588598)     <disk type='file' device='cdrom'>
	I0528 22:08:57.757238   77191 main.go:141] libmachine: (newest-cni-588598)       <source file='/home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598/boot2docker.iso'/>
	I0528 22:08:57.757251   77191 main.go:141] libmachine: (newest-cni-588598)       <target dev='hdc' bus='scsi'/>
	I0528 22:08:57.757275   77191 main.go:141] libmachine: (newest-cni-588598)       <readonly/>
	I0528 22:08:57.757299   77191 main.go:141] libmachine: (newest-cni-588598)     </disk>
	I0528 22:08:57.757311   77191 main.go:141] libmachine: (newest-cni-588598)     <disk type='file' device='disk'>
	I0528 22:08:57.757323   77191 main.go:141] libmachine: (newest-cni-588598)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0528 22:08:57.757340   77191 main.go:141] libmachine: (newest-cni-588598)       <source file='/home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598/newest-cni-588598.rawdisk'/>
	I0528 22:08:57.757366   77191 main.go:141] libmachine: (newest-cni-588598)       <target dev='hda' bus='virtio'/>
	I0528 22:08:57.757380   77191 main.go:141] libmachine: (newest-cni-588598)     </disk>
	I0528 22:08:57.757388   77191 main.go:141] libmachine: (newest-cni-588598)     <interface type='network'>
	I0528 22:08:57.757397   77191 main.go:141] libmachine: (newest-cni-588598)       <source network='mk-newest-cni-588598'/>
	I0528 22:08:57.757406   77191 main.go:141] libmachine: (newest-cni-588598)       <model type='virtio'/>
	I0528 22:08:57.757414   77191 main.go:141] libmachine: (newest-cni-588598)     </interface>
	I0528 22:08:57.757428   77191 main.go:141] libmachine: (newest-cni-588598)     <interface type='network'>
	I0528 22:08:57.757439   77191 main.go:141] libmachine: (newest-cni-588598)       <source network='default'/>
	I0528 22:08:57.757447   77191 main.go:141] libmachine: (newest-cni-588598)       <model type='virtio'/>
	I0528 22:08:57.757458   77191 main.go:141] libmachine: (newest-cni-588598)     </interface>
	I0528 22:08:57.757469   77191 main.go:141] libmachine: (newest-cni-588598)     <serial type='pty'>
	I0528 22:08:57.757480   77191 main.go:141] libmachine: (newest-cni-588598)       <target port='0'/>
	I0528 22:08:57.757490   77191 main.go:141] libmachine: (newest-cni-588598)     </serial>
	I0528 22:08:57.757499   77191 main.go:141] libmachine: (newest-cni-588598)     <console type='pty'>
	I0528 22:08:57.757510   77191 main.go:141] libmachine: (newest-cni-588598)       <target type='serial' port='0'/>
	I0528 22:08:57.757519   77191 main.go:141] libmachine: (newest-cni-588598)     </console>
	I0528 22:08:57.757527   77191 main.go:141] libmachine: (newest-cni-588598)     <rng model='virtio'>
	I0528 22:08:57.757536   77191 main.go:141] libmachine: (newest-cni-588598)       <backend model='random'>/dev/random</backend>
	I0528 22:08:57.757543   77191 main.go:141] libmachine: (newest-cni-588598)     </rng>
	I0528 22:08:57.757551   77191 main.go:141] libmachine: (newest-cni-588598)     
	I0528 22:08:57.757557   77191 main.go:141] libmachine: (newest-cni-588598)     
	I0528 22:08:57.757564   77191 main.go:141] libmachine: (newest-cni-588598)   </devices>
	I0528 22:08:57.757570   77191 main.go:141] libmachine: (newest-cni-588598) </domain>
	I0528 22:08:57.757579   77191 main.go:141] libmachine: (newest-cni-588598) 
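The lines above are the libvirt domain XML the KVM driver defines for the new machine: name, 2200 MiB of memory, 2 vCPUs, CD-ROM-then-disk boot order, the boot2docker ISO attached as a CD-ROM, the raw disk image, and virtio NICs on the mk-newest-cni-588598 and default networks. As a rough, hypothetical sketch of how such a definition can be rendered before being handed to libvirt (this is not minikube's actual template or field names), a minimal Go version using only text/template might look like:

package main

import (
	"os"
	"text/template"
)

// domainConfig carries the per-machine values; field names are illustrative,
// not minikube's.
type domainConfig struct {
	Name, ISO, Disk, Network string
	MemMiB, VCPUs            int
}

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemMiB}}</memory>
  <vcpu>{{.VCPUs}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISO}}'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.Disk}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

func main() {
	cfg := domainConfig{
		Name:    "newest-cni-588598",
		MemMiB:  2200,
		VCPUs:   2,
		ISO:     "/path/to/boot2docker.iso",
		Disk:    "/path/to/newest-cni-588598.rawdisk",
		Network: "mk-newest-cni-588598",
	}
	// Render the XML; a driver would pass the resulting string to libvirt
	// to define the domain before starting it.
	tmpl := template.Must(template.New("domain").Parse(domainTmpl))
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}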
	I0528 22:08:57.762195   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:fb:4b:f7 in network default
	I0528 22:08:57.762790   77191 main.go:141] libmachine: (newest-cni-588598) Ensuring networks are active...
	I0528 22:08:57.762813   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:08:57.763540   77191 main.go:141] libmachine: (newest-cni-588598) Ensuring network default is active
	I0528 22:08:57.763975   77191 main.go:141] libmachine: (newest-cni-588598) Ensuring network mk-newest-cni-588598 is active
	I0528 22:08:57.764617   77191 main.go:141] libmachine: (newest-cni-588598) Getting domain xml...
	I0528 22:08:57.765473   77191 main.go:141] libmachine: (newest-cni-588598) Creating domain...
	I0528 22:08:59.027894   77191 main.go:141] libmachine: (newest-cni-588598) Waiting to get IP...
	I0528 22:08:59.028655   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:08:59.029051   77191 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:08:59.029098   77191 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:08:59.029032   77214 retry.go:31] will retry after 285.280112ms: waiting for machine to come up
	I0528 22:08:59.315531   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:08:59.315990   77191 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:08:59.316023   77191 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:08:59.315931   77214 retry.go:31] will retry after 350.098141ms: waiting for machine to come up
	I0528 22:08:59.667279   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:08:59.667732   77191 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:08:59.667764   77191 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:08:59.667689   77214 retry.go:31] will retry after 456.545841ms: waiting for machine to come up
	I0528 22:09:00.126444   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:00.126951   77191 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:09:00.126974   77191 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:09:00.126893   77214 retry.go:31] will retry after 385.534431ms: waiting for machine to come up
	I0528 22:09:00.514526   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:00.514990   77191 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:09:00.515017   77191 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:09:00.514958   77214 retry.go:31] will retry after 593.263865ms: waiting for machine to come up
	I0528 22:09:01.110012   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:01.110500   77191 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:09:01.110534   77191 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:09:01.110435   77214 retry.go:31] will retry after 594.648578ms: waiting for machine to come up
	I0528 22:09:01.706760   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:01.707215   77191 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:09:01.707277   77191 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:09:01.707197   77214 retry.go:31] will retry after 877.470046ms: waiting for machine to come up
	I0528 22:09:02.586444   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:02.586845   77191 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:09:02.586932   77191 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:09:02.586839   77214 retry.go:31] will retry after 1.23527304s: waiting for machine to come up
	I0528 22:09:03.823483   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:03.824008   77191 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:09:03.824037   77191 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:09:03.823960   77214 retry.go:31] will retry after 1.43309336s: waiting for machine to come up
	I0528 22:09:05.258858   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:05.259343   77191 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:09:05.259366   77191 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:09:05.259276   77214 retry.go:31] will retry after 2.220590768s: waiting for machine to come up
	I0528 22:09:07.481727   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:07.482296   77191 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:09:07.482328   77191 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:09:07.482212   77214 retry.go:31] will retry after 2.56599614s: waiting for machine to come up
	I0528 22:09:10.051062   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:10.051694   77191 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:09:10.051729   77191 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:09:10.051617   77214 retry.go:31] will retry after 3.175068668s: waiting for machine to come up
	I0528 22:09:13.228105   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:13.228532   77191 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:09:13.228559   77191 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:09:13.228478   77214 retry.go:31] will retry after 2.777270754s: waiting for machine to come up
	I0528 22:09:16.009327   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:16.009741   77191 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:09:16.009784   77191 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:09:16.009695   77214 retry.go:31] will retry after 4.889591222s: waiting for machine to come up
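The block above is the driver polling DHCP for the new machine's address: every attempt fails with "unable to find current IP address" and schedules another try with a roughly growing delay (from ~285ms up to several seconds) until the lease appears. A generic sketch of that retry-until-ready pattern, assuming a made-up helper and backoff policy rather than minikube's retry.go:

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryUntil calls fn until it succeeds or the timeout expires, growing the
// delay between attempts. Purely illustrative.
func retryUntil(timeout time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for attempt := 1; ; attempt++ {
		if err := fn(); err == nil {
			return nil
		} else if time.Now().Add(delay).After(deadline) {
			return fmt.Errorf("gave up after %d attempts: %w", attempt, err)
		} else {
			fmt.Printf("attempt %d failed (%v), retrying in %s\n", attempt, err, delay)
			time.Sleep(delay)
			delay += delay / 2 // grow the wait, similar in spirit to the log above
		}
	}
}

func main() {
	start := time.Now()
	err := retryUntil(10*time.Second, func() error {
		// Stand-in for "look up the DHCP lease for this MAC address".
		if time.Since(start) < 2*time.Second {
			return errors.New("machine has no IP yet")
		}
		return nil
	})
	fmt.Println("result:", err)
}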
	I0528 22:09:20.903488   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:20.903995   77191 main.go:141] libmachine: (newest-cni-588598) Found IP for machine: 192.168.39.57
	I0528 22:09:20.904023   77191 main.go:141] libmachine: (newest-cni-588598) Reserving static IP address...
	I0528 22:09:20.904037   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has current primary IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:20.904350   77191 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find host DHCP lease matching {name: "newest-cni-588598", mac: "52:54:00:a4:df:c4", ip: "192.168.39.57"} in network mk-newest-cni-588598
	I0528 22:09:20.982294   77191 main.go:141] libmachine: (newest-cni-588598) DBG | Getting to WaitForSSH function...
	I0528 22:09:20.982319   77191 main.go:141] libmachine: (newest-cni-588598) Reserved static IP address: 192.168.39.57
	I0528 22:09:20.982344   77191 main.go:141] libmachine: (newest-cni-588598) Waiting for SSH to be available...
	I0528 22:09:20.984946   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:20.985323   77191 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:09:11 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a4:df:c4}
	I0528 22:09:20.985349   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:20.985533   77191 main.go:141] libmachine: (newest-cni-588598) DBG | Using SSH client type: external
	I0528 22:09:20.985558   77191 main.go:141] libmachine: (newest-cni-588598) DBG | Using SSH private key: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598/id_rsa (-rw-------)
	I0528 22:09:20.985597   77191 main.go:141] libmachine: (newest-cni-588598) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.57 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0528 22:09:20.985613   77191 main.go:141] libmachine: (newest-cni-588598) DBG | About to run SSH command:
	I0528 22:09:20.985641   77191 main.go:141] libmachine: (newest-cni-588598) DBG | exit 0
	I0528 22:09:21.113932   77191 main.go:141] libmachine: (newest-cni-588598) DBG | SSH cmd err, output: <nil>: 
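At this point the driver probes SSH by invoking the external ssh client with the machine's freshly generated key and a trivial `exit 0` command; an empty error and empty output mean the guest is reachable. A stripped-down sketch of that liveness probe via os/exec, using a subset of the options shown in the log (the helper name and key path are placeholders, not minikube code):

package main

import (
	"fmt"
	"os/exec"
)

// sshAlive returns nil once `ssh ... exit 0` succeeds against the guest.
func sshAlive(user, ip, keyPath string) error {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		fmt.Sprintf("%s@%s", user, ip),
		"exit", "0")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("ssh not ready: %v (output: %q)", err, out)
	}
	return nil
}

func main() {
	// IP and user are taken from the log; the key path is machine-specific.
	err := sshAlive("docker", "192.168.39.57", "/path/to/machines/newest-cni-588598/id_rsa")
	fmt.Println("ssh reachable:", err == nil)
}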
	I0528 22:09:21.114194   77191 main.go:141] libmachine: (newest-cni-588598) KVM machine creation complete!
	I0528 22:09:21.114515   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetConfigRaw
	I0528 22:09:21.115045   77191 main.go:141] libmachine: (newest-cni-588598) Calling .DriverName
	I0528 22:09:21.115210   77191 main.go:141] libmachine: (newest-cni-588598) Calling .DriverName
	I0528 22:09:21.115374   77191 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0528 22:09:21.115387   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetState
	I0528 22:09:21.116779   77191 main.go:141] libmachine: Detecting operating system of created instance...
	I0528 22:09:21.116792   77191 main.go:141] libmachine: Waiting for SSH to be available...
	I0528 22:09:21.116797   77191 main.go:141] libmachine: Getting to WaitForSSH function...
	I0528 22:09:21.116803   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:09:21.119454   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:21.119824   77191 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:09:11 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:09:21.119850   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:21.120014   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHPort
	I0528 22:09:21.120207   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:09:21.120371   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:09:21.120525   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHUsername
	I0528 22:09:21.120673   77191 main.go:141] libmachine: Using SSH client type: native
	I0528 22:09:21.120912   77191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0528 22:09:21.120928   77191 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0528 22:09:21.225310   77191 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 22:09:21.225350   77191 main.go:141] libmachine: Detecting the provisioner...
	I0528 22:09:21.225362   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:09:21.228359   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:21.228634   77191 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:09:11 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:09:21.228661   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:21.228823   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHPort
	I0528 22:09:21.229063   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:09:21.229220   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:09:21.229417   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHUsername
	I0528 22:09:21.229593   77191 main.go:141] libmachine: Using SSH client type: native
	I0528 22:09:21.229777   77191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0528 22:09:21.229791   77191 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0528 22:09:21.338855   77191 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0528 22:09:21.338924   77191 main.go:141] libmachine: found compatible host: buildroot
	I0528 22:09:21.338937   77191 main.go:141] libmachine: Provisioning with buildroot...
	I0528 22:09:21.338945   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetMachineName
	I0528 22:09:21.339214   77191 buildroot.go:166] provisioning hostname "newest-cni-588598"
	I0528 22:09:21.339241   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetMachineName
	I0528 22:09:21.339479   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:09:21.342126   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:21.342482   77191 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:09:11 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:09:21.342512   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:21.342620   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHPort
	I0528 22:09:21.342805   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:09:21.342959   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:09:21.343100   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHUsername
	I0528 22:09:21.343235   77191 main.go:141] libmachine: Using SSH client type: native
	I0528 22:09:21.343409   77191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0528 22:09:21.343426   77191 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-588598 && echo "newest-cni-588598" | sudo tee /etc/hostname
	I0528 22:09:21.464742   77191 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-588598
	
	I0528 22:09:21.464767   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:09:21.467630   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:21.467969   77191 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:09:11 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:09:21.468007   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:21.468095   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHPort
	I0528 22:09:21.468279   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:09:21.468429   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:09:21.468579   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHUsername
	I0528 22:09:21.468767   77191 main.go:141] libmachine: Using SSH client type: native
	I0528 22:09:21.468978   77191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0528 22:09:21.468996   77191 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-588598' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-588598/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-588598' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 22:09:21.587539   77191 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 22:09:21.587566   77191 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18966-3963/.minikube CaCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18966-3963/.minikube}
	I0528 22:09:21.587605   77191 buildroot.go:174] setting up certificates
	I0528 22:09:21.587613   77191 provision.go:84] configureAuth start
	I0528 22:09:21.587621   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetMachineName
	I0528 22:09:21.587911   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetIP
	I0528 22:09:21.590786   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:21.591127   77191 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:09:11 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:09:21.591167   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:21.591300   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:09:21.593582   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:21.593894   77191 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:09:11 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:09:21.593922   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:21.594077   77191 provision.go:143] copyHostCerts
	I0528 22:09:21.594160   77191 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem, removing ...
	I0528 22:09:21.594176   77191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem
	I0528 22:09:21.594262   77191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem (1675 bytes)
	I0528 22:09:21.594384   77191 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem, removing ...
	I0528 22:09:21.594396   77191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem
	I0528 22:09:21.594434   77191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem (1078 bytes)
	I0528 22:09:21.594522   77191 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem, removing ...
	I0528 22:09:21.594539   77191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem
	I0528 22:09:21.594578   77191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem (1123 bytes)
	I0528 22:09:21.594658   77191 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem org=jenkins.newest-cni-588598 san=[127.0.0.1 192.168.39.57 localhost minikube newest-cni-588598]
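The provision step above issues a server certificate whose SANs cover 127.0.0.1, the machine IP, localhost, minikube, and the profile name, signed by the ca.pem/ca-key.pem pair under .minikube/certs. As a minimal sketch of producing a SAN-bearing certificate with crypto/x509 (self-signed here for brevity, whereas minikube signs with its own CA; values mirror the log, expiry is arbitrary):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-588598"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-588598"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.57")},
	}
	// Self-signed for the sketch: template doubles as its own parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}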
	I0528 22:09:21.670605   77191 provision.go:177] copyRemoteCerts
	I0528 22:09:21.670652   77191 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 22:09:21.670672   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:09:21.673616   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:21.673969   77191 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:09:11 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:09:21.673996   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:21.674190   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHPort
	I0528 22:09:21.674352   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:09:21.674527   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHUsername
	I0528 22:09:21.674637   77191 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598/id_rsa Username:docker}
	I0528 22:09:21.760722   77191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0528 22:09:21.787677   77191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0528 22:09:21.813476   77191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0528 22:09:21.839238   77191 provision.go:87] duration metric: took 251.615231ms to configureAuth
	I0528 22:09:21.839263   77191 buildroot.go:189] setting minikube options for container-runtime
	I0528 22:09:21.839468   77191 config.go:182] Loaded profile config "newest-cni-588598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 22:09:21.839536   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:09:21.842066   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:21.842445   77191 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:09:11 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:09:21.842485   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:21.842621   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHPort
	I0528 22:09:21.842801   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:09:21.842983   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:09:21.843163   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHUsername
	I0528 22:09:21.843345   77191 main.go:141] libmachine: Using SSH client type: native
	I0528 22:09:21.843500   77191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0528 22:09:21.843517   77191 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0528 22:09:22.109265   77191 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0528 22:09:22.109294   77191 main.go:141] libmachine: Checking connection to Docker...
	I0528 22:09:22.109313   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetURL
	I0528 22:09:22.110745   77191 main.go:141] libmachine: (newest-cni-588598) DBG | Using libvirt version 6000000
	I0528 22:09:22.113206   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:22.113575   77191 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:09:11 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:09:22.113600   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:22.113845   77191 main.go:141] libmachine: Docker is up and running!
	I0528 22:09:22.113864   77191 main.go:141] libmachine: Reticulating splines...
	I0528 22:09:22.113871   77191 client.go:171] duration metric: took 24.83190304s to LocalClient.Create
	I0528 22:09:22.113897   77191 start.go:167] duration metric: took 24.831990112s to libmachine.API.Create "newest-cni-588598"
	I0528 22:09:22.113910   77191 start.go:293] postStartSetup for "newest-cni-588598" (driver="kvm2")
	I0528 22:09:22.113922   77191 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0528 22:09:22.113940   77191 main.go:141] libmachine: (newest-cni-588598) Calling .DriverName
	I0528 22:09:22.114157   77191 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0528 22:09:22.114179   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:09:22.116516   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:22.116875   77191 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:09:11 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:09:22.116912   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:22.117073   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHPort
	I0528 22:09:22.117255   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:09:22.117447   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHUsername
	I0528 22:09:22.117615   77191 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598/id_rsa Username:docker}
	I0528 22:09:22.200715   77191 ssh_runner.go:195] Run: cat /etc/os-release
	I0528 22:09:22.205294   77191 info.go:137] Remote host: Buildroot 2023.02.9
	I0528 22:09:22.205318   77191 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/addons for local assets ...
	I0528 22:09:22.205374   77191 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/files for local assets ...
	I0528 22:09:22.205463   77191 filesync.go:149] local asset: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem -> 117602.pem in /etc/ssl/certs
	I0528 22:09:22.205546   77191 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0528 22:09:22.215225   77191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem --> /etc/ssl/certs/117602.pem (1708 bytes)
	I0528 22:09:22.240466   77191 start.go:296] duration metric: took 126.546231ms for postStartSetup
	I0528 22:09:22.240522   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetConfigRaw
	I0528 22:09:22.241098   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetIP
	I0528 22:09:22.243958   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:22.244319   77191 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:09:11 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:09:22.244335   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:22.244627   77191 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598/config.json ...
	I0528 22:09:22.244785   77191 start.go:128] duration metric: took 24.981737676s to createHost
	I0528 22:09:22.244805   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:09:22.247128   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:22.247519   77191 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:09:11 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:09:22.247548   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:22.247668   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHPort
	I0528 22:09:22.247843   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:09:22.247997   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:09:22.248116   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHUsername
	I0528 22:09:22.248331   77191 main.go:141] libmachine: Using SSH client type: native
	I0528 22:09:22.248532   77191 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0528 22:09:22.248547   77191 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0528 22:09:22.354799   77191 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716934162.331502714
	
	I0528 22:09:22.354829   77191 fix.go:216] guest clock: 1716934162.331502714
	I0528 22:09:22.354839   77191 fix.go:229] Guest: 2024-05-28 22:09:22.331502714 +0000 UTC Remote: 2024-05-28 22:09:22.24479663 +0000 UTC m=+25.089878355 (delta=86.706084ms)
	I0528 22:09:22.354894   77191 fix.go:200] guest clock delta is within tolerance: 86.706084ms
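The fix.go lines above read `date +%s.%N` inside the guest, compare it with the host clock, and accept the result because the 86.7ms delta is inside tolerance. A toy version of that comparison (helper name and tolerance value are made up):

package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockDelta parses a `date +%s.%N` string from the guest and returns how far
// it is from the host clock right now.
func clockDelta(guestOut string) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return time.Since(guest), nil
}

func main() {
	// Pretend the guest answered with (approximately) the current host time.
	guestOut := fmt.Sprintf("%.9f", float64(time.Now().UnixNano())/1e9)
	delta, err := clockDelta(guestOut)
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second
	fmt.Printf("delta=%v within tolerance=%v\n", delta, delta < tolerance && delta > -tolerance)
	// In the log the measured delta was 86.706084ms, well inside tolerance.
}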
	I0528 22:09:22.354921   77191 start.go:83] releasing machines lock for "newest-cni-588598", held for 25.091972651s
	I0528 22:09:22.354952   77191 main.go:141] libmachine: (newest-cni-588598) Calling .DriverName
	I0528 22:09:22.355257   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetIP
	I0528 22:09:22.358210   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:22.358600   77191 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:09:11 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:09:22.358629   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:22.358790   77191 main.go:141] libmachine: (newest-cni-588598) Calling .DriverName
	I0528 22:09:22.359286   77191 main.go:141] libmachine: (newest-cni-588598) Calling .DriverName
	I0528 22:09:22.359446   77191 main.go:141] libmachine: (newest-cni-588598) Calling .DriverName
	I0528 22:09:22.359540   77191 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0528 22:09:22.359574   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:09:22.359608   77191 ssh_runner.go:195] Run: cat /version.json
	I0528 22:09:22.359630   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:09:22.362337   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:22.362567   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:22.362677   77191 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:09:11 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:09:22.362707   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:22.362902   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHPort
	I0528 22:09:22.362980   77191 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:09:11 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:09:22.363019   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:22.363088   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:09:22.363286   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHPort
	I0528 22:09:22.363301   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHUsername
	I0528 22:09:22.363471   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:09:22.363480   77191 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598/id_rsa Username:docker}
	I0528 22:09:22.363621   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHUsername
	I0528 22:09:22.363766   77191 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598/id_rsa Username:docker}
	I0528 22:09:22.443232   77191 ssh_runner.go:195] Run: systemctl --version
	I0528 22:09:22.479460   77191 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0528 22:09:22.649426   77191 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0528 22:09:22.656514   77191 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0528 22:09:22.656570   77191 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 22:09:22.672651   77191 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0528 22:09:22.672676   77191 start.go:494] detecting cgroup driver to use...
	I0528 22:09:22.672747   77191 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0528 22:09:22.695626   77191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 22:09:22.710816   77191 docker.go:217] disabling cri-docker service (if available) ...
	I0528 22:09:22.710901   77191 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0528 22:09:22.724719   77191 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0528 22:09:22.740781   77191 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0528 22:09:22.862590   77191 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0528 22:09:23.030745   77191 docker.go:233] disabling docker service ...
	I0528 22:09:23.030821   77191 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0528 22:09:23.046614   77191 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0528 22:09:23.060615   77191 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0528 22:09:23.183429   77191 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0528 22:09:23.306112   77191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0528 22:09:23.321381   77191 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 22:09:23.341737   77191 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0528 22:09:23.341819   77191 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 22:09:23.353025   77191 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0528 22:09:23.353084   77191 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 22:09:23.365142   77191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 22:09:23.376442   77191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 22:09:23.389445   77191 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0528 22:09:23.402881   77191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 22:09:23.414972   77191 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 22:09:23.434796   77191 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
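The sed one-liners above rewrite CRI-O's drop-in config before the runtime is restarted: the pause image is pinned to registry.k8s.io/pause:3.9, cgroup_manager is forced to cgroupfs, conmon_cgroup is set to "pod", and a default_sysctls list is opened that enables net.ipv4.ip_unprivileged_port_start=0. A small Go sketch applying the first two substitutions in memory, just to make the effect of the edits easy to see (illustrative, not minikube's implementation; the sample input is invented):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `pause_image = "registry.k8s.io/pause:3.8"
cgroup_manager = "systemd"
`
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)

	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// The remaining sed commands in the log additionally insert
	// conmon_cgroup = "pod" after the cgroup_manager line and a
	// default_sysctls entry for net.ipv4.ip_unprivileged_port_start=0.
	fmt.Print(conf)
}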
	I0528 22:09:23.447421   77191 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0528 22:09:23.457280   77191 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0528 22:09:23.457349   77191 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0528 22:09:23.470855   77191 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0528 22:09:23.481523   77191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 22:09:23.612297   77191 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0528 22:09:23.757883   77191 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0528 22:09:23.757962   77191 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0528 22:09:23.762952   77191 start.go:562] Will wait 60s for crictl version
	I0528 22:09:23.763006   77191 ssh_runner.go:195] Run: which crictl
	I0528 22:09:23.767408   77191 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0528 22:09:23.812782   77191 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0528 22:09:23.812860   77191 ssh_runner.go:195] Run: crio --version
	I0528 22:09:23.846562   77191 ssh_runner.go:195] Run: crio --version
	I0528 22:09:23.877871   77191 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0528 22:09:23.879118   77191 main.go:141] libmachine: (newest-cni-588598) Calling .GetIP
	I0528 22:09:23.882110   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:23.882431   77191 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:09:11 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:09:23.882455   77191 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:09:23.882794   77191 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0528 22:09:23.887855   77191 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 22:09:23.902912   77191 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0528 22:09:23.904225   77191 kubeadm.go:877] updating cluster {Name:newest-cni-588598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.1 ClusterName:newest-cni-588598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVers
ion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0528 22:09:23.904382   77191 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 22:09:23.904467   77191 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 22:09:23.937380   77191 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0528 22:09:23.937438   77191 ssh_runner.go:195] Run: which lz4
	I0528 22:09:23.941391   77191 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0528 22:09:23.945567   77191 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0528 22:09:23.945591   77191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0528 22:09:25.398547   77191 crio.go:462] duration metric: took 1.457207803s to copy over tarball
	I0528 22:09:25.398641   77191 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	
	
	==> CRI-O <==
	May 28 22:09:32 no-preload-290122 crio[732]: time="2024-05-28 22:09:32.230342198Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716934172230313063,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99934,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=662d58b1-b9ff-4fe6-bfda-1fc14be522a3 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:09:32 no-preload-290122 crio[732]: time="2024-05-28 22:09:32.231014912Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3cbcb4d6-5b0c-4614-9c9f-2279fbebb42b name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:09:32 no-preload-290122 crio[732]: time="2024-05-28 22:09:32.231152717Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3cbcb4d6-5b0c-4614-9c9f-2279fbebb42b name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:09:32 no-preload-290122 crio[732]: time="2024-05-28 22:09:32.231573351Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba,PodSandboxId:c7ab7d7de21b78a31a438ae8daa51a9da170d56c51dd669a6c88447ed7be4d4e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716933060042350178,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc1a5463-05e0-4213-a7a8-2dd7f355ac36,},Annotations:map[string]string{io.kubernetes.container.hash: 6f988c16,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6293b218a7f4ac452285cb7a65b1cc98ac1fbfb6c10c4e590c6dc8f7e3d295,PodSandboxId:7b2a6ef244bb4e90bffdd1a1d60935ce85eb6c6a064b196112c47571d4693a2f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1716933039907350972,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9b912c7e-7dc0-406d-934e-56f8c76293b4,},Annotations:map[string]string{io.kubernetes.container.hash: 3be541bb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b,PodSandboxId:9ef91c405fbc6f4838b947b9c9f47db5c1422301c1fbbce84edd53778bdbcd51,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716933036937140357,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fmk2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a084dfb5-5818-4244-9052-a9f861b45617,},Annotations:map[string]string{io.kubernetes.container.hash: fc6b3bd4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910,PodSandboxId:2914453aecb392789a4523498032d124e3ee272d48cf1fdf6f6ee55a4f928f7e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716933029192948487,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w45qh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f962c73d-872d-4f78-a6
28-267cb0be49bb,},Annotations:map[string]string{io.kubernetes.container.hash: 5a301e43,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0,PodSandboxId:c7ab7d7de21b78a31a438ae8daa51a9da170d56c51dd669a6c88447ed7be4d4e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716933029185108588,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc1a5463-05e0-4213-a7a8-2dd7f355ac
36,},Annotations:map[string]string{io.kubernetes.container.hash: 6f988c16,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af,PodSandboxId:37c3f2a6893a0ff6fe9f38f34348d82cd4cb94bf3fa884519ae0a93a6a250a19,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716933025526288076,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-290122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c52ccd26e857fd3c5eca30f8dbd103f,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc,PodSandboxId:24cc59eab1e5a3ec0585d385ca7d0de4c8f23ca6532ca7464cf28ba6ffa528db,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716933025557677625,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-290122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0bcb2cd3d47aad67c2dd098b794a5d7,},Annotations:map[strin
g]string{io.kubernetes.container.hash: c73b998,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a,PodSandboxId:b15ca0befb6f4a1b46904e62c844e9cf4a9cb70e55a6ae50f78b4126561ac5f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716933025526640126,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-290122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3357f39709a332110267d0f3d64c4674,},Annotati
ons:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e,PodSandboxId:365b73e6cf561c95c62b4c8a0e57b4a49f788144f89a8c6e304cad545934fe77,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716933025455688727,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-290122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5222ebcf86d1db94279a588215feff43,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 5e86551b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3cbcb4d6-5b0c-4614-9c9f-2279fbebb42b name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:09:32 no-preload-290122 crio[732]: time="2024-05-28 22:09:32.276608561Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0acfb3b4-f030-4450-a33b-fb95e3d429cb name=/runtime.v1.RuntimeService/Version
	May 28 22:09:32 no-preload-290122 crio[732]: time="2024-05-28 22:09:32.276727901Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0acfb3b4-f030-4450-a33b-fb95e3d429cb name=/runtime.v1.RuntimeService/Version
	May 28 22:09:32 no-preload-290122 crio[732]: time="2024-05-28 22:09:32.278514012Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3a5e1a2e-5be7-44d3-8b20-26a1094204f3 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:09:32 no-preload-290122 crio[732]: time="2024-05-28 22:09:32.279233010Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716934172279203757,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99934,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3a5e1a2e-5be7-44d3-8b20-26a1094204f3 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:09:32 no-preload-290122 crio[732]: time="2024-05-28 22:09:32.279959479Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7ba36510-7e8f-4e69-ba1b-005fd3f51b42 name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:09:32 no-preload-290122 crio[732]: time="2024-05-28 22:09:32.280024880Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7ba36510-7e8f-4e69-ba1b-005fd3f51b42 name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:09:32 no-preload-290122 crio[732]: time="2024-05-28 22:09:32.280221910Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba,PodSandboxId:c7ab7d7de21b78a31a438ae8daa51a9da170d56c51dd669a6c88447ed7be4d4e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716933060042350178,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc1a5463-05e0-4213-a7a8-2dd7f355ac36,},Annotations:map[string]string{io.kubernetes.container.hash: 6f988c16,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6293b218a7f4ac452285cb7a65b1cc98ac1fbfb6c10c4e590c6dc8f7e3d295,PodSandboxId:7b2a6ef244bb4e90bffdd1a1d60935ce85eb6c6a064b196112c47571d4693a2f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1716933039907350972,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9b912c7e-7dc0-406d-934e-56f8c76293b4,},Annotations:map[string]string{io.kubernetes.container.hash: 3be541bb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b,PodSandboxId:9ef91c405fbc6f4838b947b9c9f47db5c1422301c1fbbce84edd53778bdbcd51,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716933036937140357,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fmk2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a084dfb5-5818-4244-9052-a9f861b45617,},Annotations:map[string]string{io.kubernetes.container.hash: fc6b3bd4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910,PodSandboxId:2914453aecb392789a4523498032d124e3ee272d48cf1fdf6f6ee55a4f928f7e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716933029192948487,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w45qh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f962c73d-872d-4f78-a6
28-267cb0be49bb,},Annotations:map[string]string{io.kubernetes.container.hash: 5a301e43,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0,PodSandboxId:c7ab7d7de21b78a31a438ae8daa51a9da170d56c51dd669a6c88447ed7be4d4e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716933029185108588,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc1a5463-05e0-4213-a7a8-2dd7f355ac
36,},Annotations:map[string]string{io.kubernetes.container.hash: 6f988c16,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af,PodSandboxId:37c3f2a6893a0ff6fe9f38f34348d82cd4cb94bf3fa884519ae0a93a6a250a19,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716933025526288076,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-290122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c52ccd26e857fd3c5eca30f8dbd103f,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc,PodSandboxId:24cc59eab1e5a3ec0585d385ca7d0de4c8f23ca6532ca7464cf28ba6ffa528db,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716933025557677625,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-290122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0bcb2cd3d47aad67c2dd098b794a5d7,},Annotations:map[strin
g]string{io.kubernetes.container.hash: c73b998,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a,PodSandboxId:b15ca0befb6f4a1b46904e62c844e9cf4a9cb70e55a6ae50f78b4126561ac5f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716933025526640126,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-290122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3357f39709a332110267d0f3d64c4674,},Annotati
ons:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e,PodSandboxId:365b73e6cf561c95c62b4c8a0e57b4a49f788144f89a8c6e304cad545934fe77,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716933025455688727,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-290122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5222ebcf86d1db94279a588215feff43,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 5e86551b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7ba36510-7e8f-4e69-ba1b-005fd3f51b42 name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:09:32 no-preload-290122 crio[732]: time="2024-05-28 22:09:32.320576938Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a99a0b2a-becd-49e1-93d5-00a967aa9bd4 name=/runtime.v1.RuntimeService/Version
	May 28 22:09:32 no-preload-290122 crio[732]: time="2024-05-28 22:09:32.320648121Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a99a0b2a-becd-49e1-93d5-00a967aa9bd4 name=/runtime.v1.RuntimeService/Version
	May 28 22:09:32 no-preload-290122 crio[732]: time="2024-05-28 22:09:32.322145625Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dfb83916-00a5-4998-8d07-7ecec587312f name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:09:32 no-preload-290122 crio[732]: time="2024-05-28 22:09:32.322836439Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716934172322801802,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99934,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dfb83916-00a5-4998-8d07-7ecec587312f name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:09:32 no-preload-290122 crio[732]: time="2024-05-28 22:09:32.323366048Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a7eed8a2-f722-49e4-9ee0-8e425156742b name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:09:32 no-preload-290122 crio[732]: time="2024-05-28 22:09:32.323465639Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a7eed8a2-f722-49e4-9ee0-8e425156742b name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:09:32 no-preload-290122 crio[732]: time="2024-05-28 22:09:32.323699031Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba,PodSandboxId:c7ab7d7de21b78a31a438ae8daa51a9da170d56c51dd669a6c88447ed7be4d4e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716933060042350178,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc1a5463-05e0-4213-a7a8-2dd7f355ac36,},Annotations:map[string]string{io.kubernetes.container.hash: 6f988c16,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6293b218a7f4ac452285cb7a65b1cc98ac1fbfb6c10c4e590c6dc8f7e3d295,PodSandboxId:7b2a6ef244bb4e90bffdd1a1d60935ce85eb6c6a064b196112c47571d4693a2f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1716933039907350972,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9b912c7e-7dc0-406d-934e-56f8c76293b4,},Annotations:map[string]string{io.kubernetes.container.hash: 3be541bb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b,PodSandboxId:9ef91c405fbc6f4838b947b9c9f47db5c1422301c1fbbce84edd53778bdbcd51,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716933036937140357,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fmk2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a084dfb5-5818-4244-9052-a9f861b45617,},Annotations:map[string]string{io.kubernetes.container.hash: fc6b3bd4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910,PodSandboxId:2914453aecb392789a4523498032d124e3ee272d48cf1fdf6f6ee55a4f928f7e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716933029192948487,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w45qh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f962c73d-872d-4f78-a6
28-267cb0be49bb,},Annotations:map[string]string{io.kubernetes.container.hash: 5a301e43,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0,PodSandboxId:c7ab7d7de21b78a31a438ae8daa51a9da170d56c51dd669a6c88447ed7be4d4e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716933029185108588,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc1a5463-05e0-4213-a7a8-2dd7f355ac
36,},Annotations:map[string]string{io.kubernetes.container.hash: 6f988c16,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af,PodSandboxId:37c3f2a6893a0ff6fe9f38f34348d82cd4cb94bf3fa884519ae0a93a6a250a19,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716933025526288076,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-290122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c52ccd26e857fd3c5eca30f8dbd103f,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc,PodSandboxId:24cc59eab1e5a3ec0585d385ca7d0de4c8f23ca6532ca7464cf28ba6ffa528db,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716933025557677625,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-290122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0bcb2cd3d47aad67c2dd098b794a5d7,},Annotations:map[strin
g]string{io.kubernetes.container.hash: c73b998,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a,PodSandboxId:b15ca0befb6f4a1b46904e62c844e9cf4a9cb70e55a6ae50f78b4126561ac5f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716933025526640126,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-290122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3357f39709a332110267d0f3d64c4674,},Annotati
ons:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e,PodSandboxId:365b73e6cf561c95c62b4c8a0e57b4a49f788144f89a8c6e304cad545934fe77,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716933025455688727,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-290122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5222ebcf86d1db94279a588215feff43,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 5e86551b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a7eed8a2-f722-49e4-9ee0-8e425156742b name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:09:32 no-preload-290122 crio[732]: time="2024-05-28 22:09:32.359495244Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1269f3a0-8ef2-4d94-b883-cdb412682d9a name=/runtime.v1.RuntimeService/Version
	May 28 22:09:32 no-preload-290122 crio[732]: time="2024-05-28 22:09:32.359675563Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1269f3a0-8ef2-4d94-b883-cdb412682d9a name=/runtime.v1.RuntimeService/Version
	May 28 22:09:32 no-preload-290122 crio[732]: time="2024-05-28 22:09:32.362036280Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a51194ab-f6b5-48cb-a1da-5aefaf4463dc name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:09:32 no-preload-290122 crio[732]: time="2024-05-28 22:09:32.362679496Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716934172362646172,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99934,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a51194ab-f6b5-48cb-a1da-5aefaf4463dc name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:09:32 no-preload-290122 crio[732]: time="2024-05-28 22:09:32.365750249Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5012176b-029c-4989-9f39-7ae803d3b7c3 name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:09:32 no-preload-290122 crio[732]: time="2024-05-28 22:09:32.365873241Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5012176b-029c-4989-9f39-7ae803d3b7c3 name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:09:32 no-preload-290122 crio[732]: time="2024-05-28 22:09:32.366138685Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba,PodSandboxId:c7ab7d7de21b78a31a438ae8daa51a9da170d56c51dd669a6c88447ed7be4d4e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1716933060042350178,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc1a5463-05e0-4213-a7a8-2dd7f355ac36,},Annotations:map[string]string{io.kubernetes.container.hash: 6f988c16,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e6293b218a7f4ac452285cb7a65b1cc98ac1fbfb6c10c4e590c6dc8f7e3d295,PodSandboxId:7b2a6ef244bb4e90bffdd1a1d60935ce85eb6c6a064b196112c47571d4693a2f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1716933039907350972,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9b912c7e-7dc0-406d-934e-56f8c76293b4,},Annotations:map[string]string{io.kubernetes.container.hash: 3be541bb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b,PodSandboxId:9ef91c405fbc6f4838b947b9c9f47db5c1422301c1fbbce84edd53778bdbcd51,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716933036937140357,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fmk2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a084dfb5-5818-4244-9052-a9f861b45617,},Annotations:map[string]string{io.kubernetes.container.hash: fc6b3bd4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910,PodSandboxId:2914453aecb392789a4523498032d124e3ee272d48cf1fdf6f6ee55a4f928f7e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1716933029192948487,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w45qh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f962c73d-872d-4f78-a6
28-267cb0be49bb,},Annotations:map[string]string{io.kubernetes.container.hash: 5a301e43,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0,PodSandboxId:c7ab7d7de21b78a31a438ae8daa51a9da170d56c51dd669a6c88447ed7be4d4e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1716933029185108588,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc1a5463-05e0-4213-a7a8-2dd7f355ac
36,},Annotations:map[string]string{io.kubernetes.container.hash: 6f988c16,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af,PodSandboxId:37c3f2a6893a0ff6fe9f38f34348d82cd4cb94bf3fa884519ae0a93a6a250a19,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716933025526288076,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-290122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c52ccd26e857fd3c5eca30f8dbd103f,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc,PodSandboxId:24cc59eab1e5a3ec0585d385ca7d0de4c8f23ca6532ca7464cf28ba6ffa528db,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716933025557677625,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-290122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0bcb2cd3d47aad67c2dd098b794a5d7,},Annotations:map[strin
g]string{io.kubernetes.container.hash: c73b998,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a,PodSandboxId:b15ca0befb6f4a1b46904e62c844e9cf4a9cb70e55a6ae50f78b4126561ac5f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716933025526640126,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-290122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3357f39709a332110267d0f3d64c4674,},Annotati
ons:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e,PodSandboxId:365b73e6cf561c95c62b4c8a0e57b4a49f788144f89a8c6e304cad545934fe77,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716933025455688727,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-290122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5222ebcf86d1db94279a588215feff43,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 5e86551b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5012176b-029c-4989-9f39-7ae803d3b7c3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6e80571418c7d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Running             storage-provisioner       3                   c7ab7d7de21b7       storage-provisioner
	6e6293b218a7f       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   18 minutes ago      Running             busybox                   1                   7b2a6ef244bb4       busybox
	ebc2314ec3dcb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      18 minutes ago      Running             coredns                   1                   9ef91c405fbc6       coredns-7db6d8ff4d-fmk2h
	9a787e20b35dd       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      19 minutes ago      Running             kube-proxy                1                   2914453aecb39       kube-proxy-w45qh
	912c92cb728e6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Exited              storage-provisioner       2                   c7ab7d7de21b7       storage-provisioner
	42608327556ea       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      19 minutes ago      Running             kube-apiserver            1                   24cc59eab1e5a       kube-apiserver-no-preload-290122
	e1f2c88b18006       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      19 minutes ago      Running             kube-controller-manager   1                   b15ca0befb6f4       kube-controller-manager-no-preload-290122
	e3d4c1df4c10f       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      19 minutes ago      Running             kube-scheduler            1                   37c3f2a6893a0       kube-scheduler-no-preload-290122
	48e5c5e140f93       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      19 minutes ago      Running             etcd                      1                   365b73e6cf561       etcd-no-preload-290122
	
	
	==> coredns [ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50005 - 31532 "HINFO IN 7776364950442401203.7578220013407324169. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.00938918s
	
	
	==> describe nodes <==
	Name:               no-preload-290122
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-290122
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82
	                    minikube.k8s.io/name=no-preload-290122
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_28T21_40_28_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 21:40:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-290122
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 22:09:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 May 2024 22:06:17 +0000   Tue, 28 May 2024 21:40:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 May 2024 22:06:17 +0000   Tue, 28 May 2024 21:40:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 May 2024 22:06:17 +0000   Tue, 28 May 2024 21:40:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 May 2024 22:06:17 +0000   Tue, 28 May 2024 21:50:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.138
	  Hostname:    no-preload-290122
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ca6de69c011242d2b09e549a99f050f4
	  System UUID:                ca6de69c-0112-42d2-b09e-549a99f050f4
	  Boot ID:                    9b840b8d-7c5d-4481-b7a6-bca6f3fd097a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7db6d8ff4d-fmk2h                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-no-preload-290122                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-no-preload-290122             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-no-preload-290122    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-w45qh                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-no-preload-290122             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-569cc877fc-j2khc              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 19m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m                kubelet          Node no-preload-290122 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node no-preload-290122 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node no-preload-290122 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeReady                29m                kubelet          Node no-preload-290122 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node no-preload-290122 event: Registered Node no-preload-290122 in Controller
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node no-preload-290122 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node no-preload-290122 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node no-preload-290122 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18m                node-controller  Node no-preload-290122 event: Registered Node no-preload-290122 in Controller
	
	
	==> dmesg <==
	[May28 21:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.060137] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042512] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.731246] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.451621] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.482381] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[May28 21:50] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.062272] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067999] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +0.196619] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +0.152020] systemd-fstab-generator[686]: Ignoring "noauto" option for root device
	[  +0.297350] systemd-fstab-generator[715]: Ignoring "noauto" option for root device
	[ +16.239014] systemd-fstab-generator[1238]: Ignoring "noauto" option for root device
	[  +0.068653] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.363733] systemd-fstab-generator[1359]: Ignoring "noauto" option for root device
	[  +4.597179] kauditd_printk_skb: 100 callbacks suppressed
	[  +2.452823] systemd-fstab-generator[1979]: Ignoring "noauto" option for root device
	[  +3.327825] kauditd_printk_skb: 56 callbacks suppressed
	[  +5.058167] kauditd_printk_skb: 35 callbacks suppressed
	
	
	==> etcd [48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e] <==
	{"level":"info","ts":"2024-05-28T21:50:27.279587Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-28T21:50:27.279686Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-28T21:50:27.282117Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.138:2379"}
	{"level":"info","ts":"2024-05-28T21:50:27.282208Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-28T21:50:27.282239Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-28T21:50:27.284877Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-28T21:58:52.026142Z","caller":"traceutil/trace.go:171","msg":"trace[2018511596] transaction","detail":"{read_only:false; response_revision:1002; number_of_response:1; }","duration":"211.347913ms","start":"2024-05-28T21:58:51.814738Z","end":"2024-05-28T21:58:52.026086Z","steps":["trace[2018511596] 'process raft request'  (duration: 211.219863ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T21:58:52.929515Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"639.014565ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-28T21:58:52.930179Z","caller":"traceutil/trace.go:171","msg":"trace[1851811743] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1002; }","duration":"639.861959ms","start":"2024-05-28T21:58:52.290291Z","end":"2024-05-28T21:58:52.930153Z","steps":["trace[1851811743] 'range keys from in-memory index tree'  (duration: 638.969441ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T21:58:52.930285Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-28T21:58:52.290277Z","time spent":"639.975101ms","remote":"127.0.0.1:44198","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":0,"response size":28,"request content":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" "}
	{"level":"info","ts":"2024-05-28T22:00:27.33975Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":835}
	{"level":"info","ts":"2024-05-28T22:00:27.351663Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":835,"took":"11.508806ms","hash":3359253185,"current-db-size-bytes":2629632,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2629632,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-05-28T22:00:27.351734Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3359253185,"revision":835,"compact-revision":-1}
	{"level":"info","ts":"2024-05-28T22:05:27.349153Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1077}
	{"level":"info","ts":"2024-05-28T22:05:27.353003Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1077,"took":"3.357863ms","hash":2028915027,"current-db-size-bytes":2629632,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1576960,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-05-28T22:05:27.35309Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2028915027,"revision":1077,"compact-revision":835}
	{"level":"warn","ts":"2024-05-28T22:09:29.527529Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"157.327491ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-28T22:09:29.528086Z","caller":"traceutil/trace.go:171","msg":"trace[1620237469] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1516; }","duration":"157.91448ms","start":"2024-05-28T22:09:29.3701Z","end":"2024-05-28T22:09:29.528014Z","steps":["trace[1620237469] 'range keys from in-memory index tree'  (duration: 157.219931ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T22:09:30.309133Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.419056ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13288873185364721956 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.50.138\" mod_revision:1509 > success:<request_put:<key:\"/registry/masterleases/192.168.50.138\" value_size:67 lease:4065501148509946146 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.138\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-05-28T22:09:30.309266Z","caller":"traceutil/trace.go:171","msg":"trace[1717428399] transaction","detail":"{read_only:false; response_revision:1517; number_of_response:1; }","duration":"259.861644ms","start":"2024-05-28T22:09:30.049393Z","end":"2024-05-28T22:09:30.309255Z","steps":["trace[1717428399] 'process raft request'  (duration: 125.50765ms)","trace[1717428399] 'compare'  (duration: 133.313812ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-28T22:09:30.847797Z","caller":"traceutil/trace.go:171","msg":"trace[308096097] linearizableReadLoop","detail":"{readStateIndex:1792; appliedIndex:1791; }","duration":"132.105421ms","start":"2024-05-28T22:09:30.715674Z","end":"2024-05-28T22:09:30.84778Z","steps":["trace[308096097] 'read index received'  (duration: 131.927724ms)","trace[308096097] 'applied index is now lower than readState.Index'  (duration: 177.175µs)"],"step_count":2}
	{"level":"info","ts":"2024-05-28T22:09:30.851667Z","caller":"traceutil/trace.go:171","msg":"trace[505881864] transaction","detail":"{read_only:false; response_revision:1518; number_of_response:1; }","duration":"315.814267ms","start":"2024-05-28T22:09:30.535794Z","end":"2024-05-28T22:09:30.851608Z","steps":["trace[505881864] 'process raft request'  (duration: 311.888275ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T22:09:30.851933Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-28T22:09:30.535773Z","time spent":"316.023142ms","remote":"127.0.0.1:44280","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":682,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-sfhqkwu4xbsu7njhlrfcvgjexq\" mod_revision:1510 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-sfhqkwu4xbsu7njhlrfcvgjexq\" value_size:609 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-sfhqkwu4xbsu7njhlrfcvgjexq\" > >"}
	{"level":"warn","ts":"2024-05-28T22:09:30.847931Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.28013ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-28T22:09:30.852539Z","caller":"traceutil/trace.go:171","msg":"trace[900687923] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1518; }","duration":"136.917841ms","start":"2024-05-28T22:09:30.715604Z","end":"2024-05-28T22:09:30.852522Z","steps":["trace[900687923] 'agreement among raft nodes before linearized reading'  (duration: 132.264324ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:09:32 up 19 min,  0 users,  load average: 0.22, 0.21, 0.18
	Linux no-preload-290122 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc] <==
	I0528 22:03:29.671467       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0528 22:05:28.675576       1 handler_proxy.go:93] no RequestInfo found in the context
	E0528 22:05:28.675682       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0528 22:05:29.676264       1 handler_proxy.go:93] no RequestInfo found in the context
	W0528 22:05:29.676268       1 handler_proxy.go:93] no RequestInfo found in the context
	E0528 22:05:29.676587       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0528 22:05:29.676621       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0528 22:05:29.676564       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0528 22:05:29.678642       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0528 22:06:29.677027       1 handler_proxy.go:93] no RequestInfo found in the context
	E0528 22:06:29.677296       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0528 22:06:29.677440       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0528 22:06:29.679322       1 handler_proxy.go:93] no RequestInfo found in the context
	E0528 22:06:29.679387       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0528 22:06:29.679452       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0528 22:08:29.678037       1 handler_proxy.go:93] no RequestInfo found in the context
	E0528 22:08:29.678134       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0528 22:08:29.678143       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0528 22:08:29.680384       1 handler_proxy.go:93] no RequestInfo found in the context
	E0528 22:08:29.680573       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0528 22:08:29.680602       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a] <==
	I0528 22:03:42.705296       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 22:04:12.201834       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:04:12.714657       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 22:04:42.206359       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:04:42.723679       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 22:05:12.212335       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:05:12.731936       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 22:05:42.220480       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:05:42.738955       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 22:06:12.227085       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:06:12.747872       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0528 22:06:32.866769       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="655.874µs"
	E0528 22:06:42.231649       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:06:42.755754       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0528 22:06:45.863781       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="91.347µs"
	E0528 22:07:12.237921       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:07:12.763870       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 22:07:42.243030       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:07:42.772597       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 22:08:12.250182       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:08:12.785889       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 22:08:42.256076       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:08:42.793203       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 22:09:12.260846       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:09:12.801379       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910] <==
	I0528 21:50:29.368066       1 server_linux.go:69] "Using iptables proxy"
	I0528 21:50:29.378278       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.138"]
	I0528 21:50:29.419761       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0528 21:50:29.419842       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0528 21:50:29.419870       1 server_linux.go:165] "Using iptables Proxier"
	I0528 21:50:29.422847       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0528 21:50:29.423078       1 server.go:872] "Version info" version="v1.30.1"
	I0528 21:50:29.423282       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 21:50:29.424941       1 config.go:192] "Starting service config controller"
	I0528 21:50:29.424999       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0528 21:50:29.425039       1 config.go:101] "Starting endpoint slice config controller"
	I0528 21:50:29.425060       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0528 21:50:29.426982       1 config.go:319] "Starting node config controller"
	I0528 21:50:29.427019       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0528 21:50:29.525801       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0528 21:50:29.525838       1 shared_informer.go:320] Caches are synced for service config
	I0528 21:50:29.527305       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af] <==
	I0528 21:50:26.649713       1 serving.go:380] Generated self-signed cert in-memory
	W0528 21:50:28.604721       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0528 21:50:28.604764       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0528 21:50:28.604774       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0528 21:50:28.604780       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0528 21:50:28.668954       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0528 21:50:28.668999       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 21:50:28.672720       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0528 21:50:28.672811       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0528 21:50:28.672838       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0528 21:50:28.672857       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0528 21:50:28.773950       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 28 22:07:24 no-preload-290122 kubelet[1367]: E0528 22:07:24.865223    1367 iptables.go:577] "Could not set up iptables canary" err=<
	May 28 22:07:24 no-preload-290122 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 22:07:24 no-preload-290122 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 22:07:24 no-preload-290122 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 22:07:24 no-preload-290122 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 28 22:07:37 no-preload-290122 kubelet[1367]: E0528 22:07:37.848357    1367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j2khc" podUID="2254e89c-3a61-4523-99a2-27ec92e73c9a"
	May 28 22:07:50 no-preload-290122 kubelet[1367]: E0528 22:07:50.848460    1367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j2khc" podUID="2254e89c-3a61-4523-99a2-27ec92e73c9a"
	May 28 22:08:01 no-preload-290122 kubelet[1367]: E0528 22:08:01.849345    1367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j2khc" podUID="2254e89c-3a61-4523-99a2-27ec92e73c9a"
	May 28 22:08:12 no-preload-290122 kubelet[1367]: E0528 22:08:12.848497    1367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j2khc" podUID="2254e89c-3a61-4523-99a2-27ec92e73c9a"
	May 28 22:08:24 no-preload-290122 kubelet[1367]: E0528 22:08:24.866061    1367 iptables.go:577] "Could not set up iptables canary" err=<
	May 28 22:08:24 no-preload-290122 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 22:08:24 no-preload-290122 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 22:08:24 no-preload-290122 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 22:08:24 no-preload-290122 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 28 22:08:26 no-preload-290122 kubelet[1367]: E0528 22:08:26.849531    1367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j2khc" podUID="2254e89c-3a61-4523-99a2-27ec92e73c9a"
	May 28 22:08:38 no-preload-290122 kubelet[1367]: E0528 22:08:38.848712    1367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j2khc" podUID="2254e89c-3a61-4523-99a2-27ec92e73c9a"
	May 28 22:08:51 no-preload-290122 kubelet[1367]: E0528 22:08:51.848561    1367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j2khc" podUID="2254e89c-3a61-4523-99a2-27ec92e73c9a"
	May 28 22:09:02 no-preload-290122 kubelet[1367]: E0528 22:09:02.848042    1367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j2khc" podUID="2254e89c-3a61-4523-99a2-27ec92e73c9a"
	May 28 22:09:13 no-preload-290122 kubelet[1367]: E0528 22:09:13.848180    1367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j2khc" podUID="2254e89c-3a61-4523-99a2-27ec92e73c9a"
	May 28 22:09:24 no-preload-290122 kubelet[1367]: E0528 22:09:24.867045    1367 iptables.go:577] "Could not set up iptables canary" err=<
	May 28 22:09:24 no-preload-290122 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 22:09:24 no-preload-290122 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 22:09:24 no-preload-290122 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 22:09:24 no-preload-290122 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 28 22:09:26 no-preload-290122 kubelet[1367]: E0528 22:09:26.849534    1367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j2khc" podUID="2254e89c-3a61-4523-99a2-27ec92e73c9a"
	
	
	==> storage-provisioner [6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba] <==
	I0528 21:51:00.173200       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0528 21:51:00.186474       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0528 21:51:00.187166       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0528 21:51:17.591804       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0528 21:51:17.592091       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-290122_079d0c91-a672-4362-a8a6-bea900690c58!
	I0528 21:51:17.592707       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4314861e-97db-4897-9ca7-3871b33d30d9", APIVersion:"v1", ResourceVersion:"618", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-290122_079d0c91-a672-4362-a8a6-bea900690c58 became leader
	I0528 21:51:17.700227       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-290122_079d0c91-a672-4362-a8a6-bea900690c58!
	
	
	==> storage-provisioner [912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0] <==
	I0528 21:50:29.309838       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0528 21:50:59.313671       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-290122 -n no-preload-290122
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-290122 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-j2khc
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-290122 describe pod metrics-server-569cc877fc-j2khc
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-290122 describe pod metrics-server-569cc877fc-j2khc: exit status 1 (83.221679ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-j2khc" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-290122 describe pod metrics-server-569cc877fc-j2khc: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (335.82s)
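For reference, the post-mortem step above gathers non-running pods with "kubectl get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running". The snippet below is a minimal client-go sketch of that same check, not code from the test harness; it assumes the default kubeconfig and current context, and every name in it is illustrative.

	// List pods that are not in phase Running, mirroring the post-mortem
	// kubectl call above. Sketch only: kubeconfig/context handling is simplified.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Default kubeconfig, current context; selecting a named context such as
		// no-preload-290122 would need clientcmd's override machinery instead.
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		// The field selector filters server-side; "" lists across all namespaces.
		pods, err := cs.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{
			FieldSelector: "status.phase!=Running",
		})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}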

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-249165 -n default-k8s-diff-port-249165
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-05-28 22:13:00.81078308 +0000 UTC m=+6711.566861691
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
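The failure recorded above is the harness timing out while polling the kubernetes-dashboard namespace for a pod labelled k8s-app=kubernetes-dashboard. The sketch below shows that style of label-selector wait in client-go under the same 9m0s budget; it assumes the default kubeconfig and current context, simplifies readiness to a Running-phase check, and is illustrative rather than the harness's own wait helper.

	// Poll for a Running pod matching a label selector, giving up after 9 minutes.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		// Check every 5s; stop after the 9m0s quoted in the failure message.
		err = wait.PollUntilContextTimeout(context.Background(), 5*time.Second, 9*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
					LabelSelector: "k8s-app=kubernetes-dashboard",
				})
				if err != nil {
					return false, nil // tolerate transient API errors and keep polling
				}
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						return true, nil
					}
				}
				return false, nil
			})
		if err != nil {
			fmt.Println("dashboard pod never reached Running:", err)
			return
		}
		fmt.Println("dashboard pod is Running")
	}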
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-249165 -n default-k8s-diff-port-249165
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-249165 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-249165 logs -n 25: (1.153293162s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-290122                                   | no-preload-290122            | jenkins | v1.33.1 | 28 May 24 21:44 UTC | 28 May 24 21:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-595279                                  | embed-certs-595279           | jenkins | v1.33.1 | 28 May 24 21:44 UTC | 28 May 24 21:53 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-499466                              | old-k8s-version-499466       | jenkins | v1.33.1 | 28 May 24 21:45 UTC | 28 May 24 21:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-499466             | old-k8s-version-499466       | jenkins | v1.33.1 | 28 May 24 21:45 UTC | 28 May 24 21:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-499466                              | old-k8s-version-499466       | jenkins | v1.33.1 | 28 May 24 21:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-257793                              | cert-expiration-257793       | jenkins | v1.33.1 | 28 May 24 21:46 UTC | 28 May 24 21:46 UTC |
	| delete  | -p                                                     | disable-driver-mounts-807140 | jenkins | v1.33.1 | 28 May 24 21:46 UTC | 28 May 24 21:46 UTC |
	|         | disable-driver-mounts-807140                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-249165 | jenkins | v1.33.1 | 28 May 24 21:46 UTC | 28 May 24 21:50 UTC |
	|         | default-k8s-diff-port-249165                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-249165  | default-k8s-diff-port-249165 | jenkins | v1.33.1 | 28 May 24 21:51 UTC | 28 May 24 21:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-249165 | jenkins | v1.33.1 | 28 May 24 21:51 UTC |                     |
	|         | default-k8s-diff-port-249165                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-249165       | default-k8s-diff-port-249165 | jenkins | v1.33.1 | 28 May 24 21:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-249165 | jenkins | v1.33.1 | 28 May 24 21:53 UTC | 28 May 24 22:04 UTC |
	|         | default-k8s-diff-port-249165                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-499466                              | old-k8s-version-499466       | jenkins | v1.33.1 | 28 May 24 22:08 UTC | 28 May 24 22:08 UTC |
	| start   | -p newest-cni-588598 --memory=2200 --alsologtostderr   | newest-cni-588598            | jenkins | v1.33.1 | 28 May 24 22:08 UTC | 28 May 24 22:09 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-290122                                   | no-preload-290122            | jenkins | v1.33.1 | 28 May 24 22:09 UTC | 28 May 24 22:09 UTC |
	| addons  | enable metrics-server -p newest-cni-588598             | newest-cni-588598            | jenkins | v1.33.1 | 28 May 24 22:09 UTC | 28 May 24 22:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-588598                                   | newest-cni-588598            | jenkins | v1.33.1 | 28 May 24 22:09 UTC | 28 May 24 22:10 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-595279                                  | embed-certs-595279           | jenkins | v1.33.1 | 28 May 24 22:09 UTC | 28 May 24 22:09 UTC |
	| addons  | enable dashboard -p newest-cni-588598                  | newest-cni-588598            | jenkins | v1.33.1 | 28 May 24 22:10 UTC | 28 May 24 22:10 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-588598 --memory=2200 --alsologtostderr   | newest-cni-588598            | jenkins | v1.33.1 | 28 May 24 22:10 UTC | 28 May 24 22:10 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-588598 image list                           | newest-cni-588598            | jenkins | v1.33.1 | 28 May 24 22:10 UTC | 28 May 24 22:10 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-588598                                   | newest-cni-588598            | jenkins | v1.33.1 | 28 May 24 22:10 UTC | 28 May 24 22:10 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-588598                                   | newest-cni-588598            | jenkins | v1.33.1 | 28 May 24 22:10 UTC | 28 May 24 22:10 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-588598                                   | newest-cni-588598            | jenkins | v1.33.1 | 28 May 24 22:10 UTC | 28 May 24 22:10 UTC |
	| delete  | -p newest-cni-588598                                   | newest-cni-588598            | jenkins | v1.33.1 | 28 May 24 22:10 UTC | 28 May 24 22:10 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/28 22:10:03
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0528 22:10:03.487472   78166 out.go:291] Setting OutFile to fd 1 ...
	I0528 22:10:03.487717   78166 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 22:10:03.487726   78166 out.go:304] Setting ErrFile to fd 2...
	I0528 22:10:03.487730   78166 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 22:10:03.487900   78166 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
	I0528 22:10:03.488383   78166 out.go:298] Setting JSON to false
	I0528 22:10:03.489199   78166 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6746,"bootTime":1716927457,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0528 22:10:03.489254   78166 start.go:139] virtualization: kvm guest
	I0528 22:10:03.491506   78166 out.go:177] * [newest-cni-588598] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0528 22:10:03.492798   78166 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 22:10:03.492799   78166 notify.go:220] Checking for updates...
	I0528 22:10:03.494011   78166 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 22:10:03.495913   78166 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 22:10:03.497297   78166 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 22:10:03.498518   78166 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0528 22:10:03.499871   78166 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 22:10:03.501242   78166 config.go:182] Loaded profile config "newest-cni-588598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 22:10:03.501626   78166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 22:10:03.501690   78166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 22:10:03.516147   78166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37103
	I0528 22:10:03.516483   78166 main.go:141] libmachine: () Calling .GetVersion
	I0528 22:10:03.516961   78166 main.go:141] libmachine: Using API Version  1
	I0528 22:10:03.516982   78166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 22:10:03.517285   78166 main.go:141] libmachine: () Calling .GetMachineName
	I0528 22:10:03.517476   78166 main.go:141] libmachine: (newest-cni-588598) Calling .DriverName
	I0528 22:10:03.517742   78166 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 22:10:03.518083   78166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 22:10:03.518118   78166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 22:10:03.532156   78166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33283
	I0528 22:10:03.532488   78166 main.go:141] libmachine: () Calling .GetVersion
	I0528 22:10:03.532895   78166 main.go:141] libmachine: Using API Version  1
	I0528 22:10:03.532913   78166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 22:10:03.533318   78166 main.go:141] libmachine: () Calling .GetMachineName
	I0528 22:10:03.533545   78166 main.go:141] libmachine: (newest-cni-588598) Calling .DriverName
	I0528 22:10:03.567889   78166 out.go:177] * Using the kvm2 driver based on existing profile
	I0528 22:10:03.569220   78166 start.go:297] selected driver: kvm2
	I0528 22:10:03.569233   78166 start.go:901] validating driver "kvm2" against &{Name:newest-cni-588598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-588598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 22:10:03.569340   78166 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0528 22:10:03.570282   78166 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 22:10:03.570362   78166 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18966-3963/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0528 22:10:03.584694   78166 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0528 22:10:03.585222   78166 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0528 22:10:03.585296   78166 cni.go:84] Creating CNI manager for ""
	I0528 22:10:03.585313   78166 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 22:10:03.585368   78166 start.go:340] cluster config:
	{Name:newest-cni-588598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-588598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 22:10:03.585538   78166 iso.go:125] acquiring lock: {Name:mk0ef48bd19f4019c3ee87391d15b9afc1d5e8d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 22:10:03.587612   78166 out.go:177] * Starting "newest-cni-588598" primary control-plane node in "newest-cni-588598" cluster
	I0528 22:10:03.588794   78166 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 22:10:03.588824   78166 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0528 22:10:03.588834   78166 cache.go:56] Caching tarball of preloaded images
	I0528 22:10:03.588900   78166 preload.go:173] Found /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0528 22:10:03.588910   78166 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0528 22:10:03.589003   78166 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598/config.json ...
	I0528 22:10:03.589179   78166 start.go:360] acquireMachinesLock for newest-cni-588598: {Name:mk6fb3103b370c7f3d9923fd9de7afe2c77239d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0528 22:10:03.589237   78166 start.go:364] duration metric: took 30.18µs to acquireMachinesLock for "newest-cni-588598"
	I0528 22:10:03.589256   78166 start.go:96] Skipping create...Using existing machine configuration
	I0528 22:10:03.589266   78166 fix.go:54] fixHost starting: 
	I0528 22:10:03.589606   78166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 22:10:03.589639   78166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 22:10:03.603301   78166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44899
	I0528 22:10:03.603685   78166 main.go:141] libmachine: () Calling .GetVersion
	I0528 22:10:03.604116   78166 main.go:141] libmachine: Using API Version  1
	I0528 22:10:03.604144   78166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 22:10:03.604536   78166 main.go:141] libmachine: () Calling .GetMachineName
	I0528 22:10:03.604742   78166 main.go:141] libmachine: (newest-cni-588598) Calling .DriverName
	I0528 22:10:03.604891   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetState
	I0528 22:10:03.606735   78166 fix.go:112] recreateIfNeeded on newest-cni-588598: state=Stopped err=<nil>
	I0528 22:10:03.606756   78166 main.go:141] libmachine: (newest-cni-588598) Calling .DriverName
	W0528 22:10:03.606901   78166 fix.go:138] unexpected machine state, will restart: <nil>
	I0528 22:10:03.608698   78166 out.go:177] * Restarting existing kvm2 VM for "newest-cni-588598" ...
	I0528 22:10:03.609810   78166 main.go:141] libmachine: (newest-cni-588598) Calling .Start
	I0528 22:10:03.609957   78166 main.go:141] libmachine: (newest-cni-588598) Ensuring networks are active...
	I0528 22:10:03.610705   78166 main.go:141] libmachine: (newest-cni-588598) Ensuring network default is active
	I0528 22:10:03.611013   78166 main.go:141] libmachine: (newest-cni-588598) Ensuring network mk-newest-cni-588598 is active
	I0528 22:10:03.611420   78166 main.go:141] libmachine: (newest-cni-588598) Getting domain xml...
	I0528 22:10:03.612186   78166 main.go:141] libmachine: (newest-cni-588598) Creating domain...
	I0528 22:10:04.803094   78166 main.go:141] libmachine: (newest-cni-588598) Waiting to get IP...
	I0528 22:10:04.803873   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:04.804234   78166 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:10:04.804313   78166 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:10:04.804207   78201 retry.go:31] will retry after 257.984747ms: waiting for machine to come up
	I0528 22:10:05.063999   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:05.064497   78166 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:10:05.064525   78166 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:10:05.064456   78201 retry.go:31] will retry after 246.19476ms: waiting for machine to come up
	I0528 22:10:05.311911   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:05.312392   78166 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:10:05.312416   78166 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:10:05.312340   78201 retry.go:31] will retry after 335.114844ms: waiting for machine to come up
	I0528 22:10:05.648649   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:05.649131   78166 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:10:05.649161   78166 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:10:05.649082   78201 retry.go:31] will retry after 440.66407ms: waiting for machine to come up
	I0528 22:10:06.091690   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:06.092113   78166 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:10:06.092143   78166 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:10:06.092079   78201 retry.go:31] will retry after 596.385085ms: waiting for machine to come up
	I0528 22:10:06.689941   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:06.690445   78166 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:10:06.690478   78166 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:10:06.690405   78201 retry.go:31] will retry after 690.571827ms: waiting for machine to come up
	I0528 22:10:07.382296   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:07.382706   78166 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:10:07.382731   78166 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:10:07.382645   78201 retry.go:31] will retry after 886.933473ms: waiting for machine to come up
	I0528 22:10:08.270613   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:08.270993   78166 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:10:08.271022   78166 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:10:08.270962   78201 retry.go:31] will retry after 917.957007ms: waiting for machine to come up
	I0528 22:10:09.190755   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:09.191249   78166 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:10:09.191278   78166 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:10:09.191194   78201 retry.go:31] will retry after 1.636471321s: waiting for machine to come up
	I0528 22:10:10.829472   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:10.829998   78166 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:10:10.830024   78166 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:10:10.829930   78201 retry.go:31] will retry after 1.594778354s: waiting for machine to come up
	I0528 22:10:12.426743   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:12.427199   78166 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:10:12.427230   78166 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:10:12.427132   78201 retry.go:31] will retry after 2.561893178s: waiting for machine to come up
	I0528 22:10:14.990660   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:14.991079   78166 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:10:14.991107   78166 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:10:14.991029   78201 retry.go:31] will retry after 2.20210997s: waiting for machine to come up
	I0528 22:10:17.196545   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:17.196881   78166 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:10:17.196913   78166 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:10:17.196844   78201 retry.go:31] will retry after 3.778097083s: waiting for machine to come up
	I0528 22:10:20.977593   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:20.978352   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has current primary IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:20.978382   78166 main.go:141] libmachine: (newest-cni-588598) Found IP for machine: 192.168.39.57
	I0528 22:10:20.978394   78166 main.go:141] libmachine: (newest-cni-588598) Reserving static IP address...
	I0528 22:10:20.978804   78166 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "newest-cni-588598", mac: "52:54:00:a4:df:c4", ip: "192.168.39.57"} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:10:14 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:10:20.978828   78166 main.go:141] libmachine: (newest-cni-588598) Reserved static IP address: 192.168.39.57
	I0528 22:10:20.978840   78166 main.go:141] libmachine: (newest-cni-588598) DBG | skip adding static IP to network mk-newest-cni-588598 - found existing host DHCP lease matching {name: "newest-cni-588598", mac: "52:54:00:a4:df:c4", ip: "192.168.39.57"}
	I0528 22:10:20.978854   78166 main.go:141] libmachine: (newest-cni-588598) DBG | Getting to WaitForSSH function...
	I0528 22:10:20.978864   78166 main.go:141] libmachine: (newest-cni-588598) Waiting for SSH to be available...
	I0528 22:10:20.980785   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:20.981077   78166 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:10:14 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:10:20.981113   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:20.981274   78166 main.go:141] libmachine: (newest-cni-588598) DBG | Using SSH client type: external
	I0528 22:10:20.981301   78166 main.go:141] libmachine: (newest-cni-588598) DBG | Using SSH private key: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598/id_rsa (-rw-------)
	I0528 22:10:20.981332   78166 main.go:141] libmachine: (newest-cni-588598) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.57 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0528 22:10:20.981351   78166 main.go:141] libmachine: (newest-cni-588598) DBG | About to run SSH command:
	I0528 22:10:20.981363   78166 main.go:141] libmachine: (newest-cni-588598) DBG | exit 0
	I0528 22:10:21.101483   78166 main.go:141] libmachine: (newest-cni-588598) DBG | SSH cmd err, output: <nil>: 
	I0528 22:10:21.101898   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetConfigRaw
	I0528 22:10:21.102438   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetIP
	I0528 22:10:21.104898   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:21.105261   78166 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:10:14 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:10:21.105295   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:21.105499   78166 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598/config.json ...
	I0528 22:10:21.105674   78166 machine.go:94] provisionDockerMachine start ...
	I0528 22:10:21.105691   78166 main.go:141] libmachine: (newest-cni-588598) Calling .DriverName
	I0528 22:10:21.105926   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:10:21.107989   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:21.108270   78166 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:10:14 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:10:21.108289   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:21.108397   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHPort
	I0528 22:10:21.108557   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:10:21.108712   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:10:21.108837   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHUsername
	I0528 22:10:21.108994   78166 main.go:141] libmachine: Using SSH client type: native
	I0528 22:10:21.109230   78166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0528 22:10:21.109246   78166 main.go:141] libmachine: About to run SSH command:
	hostname
	I0528 22:10:21.210053   78166 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0528 22:10:21.210092   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetMachineName
	I0528 22:10:21.210344   78166 buildroot.go:166] provisioning hostname "newest-cni-588598"
	I0528 22:10:21.210366   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetMachineName
	I0528 22:10:21.210559   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:10:21.213067   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:21.213381   78166 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:10:14 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:10:21.213412   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:21.213491   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHPort
	I0528 22:10:21.213648   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:10:21.213804   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:10:21.213963   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHUsername
	I0528 22:10:21.214112   78166 main.go:141] libmachine: Using SSH client type: native
	I0528 22:10:21.214271   78166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0528 22:10:21.214282   78166 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-588598 && echo "newest-cni-588598" | sudo tee /etc/hostname
	I0528 22:10:21.334983   78166 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-588598
	
	I0528 22:10:21.335018   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:10:21.337716   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:21.338073   78166 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:10:14 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:10:21.338112   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:21.338238   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHPort
	I0528 22:10:21.338435   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:10:21.338607   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:10:21.338736   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHUsername
	I0528 22:10:21.338884   78166 main.go:141] libmachine: Using SSH client type: native
	I0528 22:10:21.339078   78166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0528 22:10:21.339102   78166 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-588598' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-588598/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-588598' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 22:10:21.446582   78166 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 22:10:21.446608   78166 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18966-3963/.minikube CaCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18966-3963/.minikube}
	I0528 22:10:21.446629   78166 buildroot.go:174] setting up certificates
	I0528 22:10:21.446640   78166 provision.go:84] configureAuth start
	I0528 22:10:21.446651   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetMachineName
	I0528 22:10:21.446906   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetIP
	I0528 22:10:21.449345   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:21.449708   78166 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:10:14 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:10:21.449740   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:21.449912   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:10:21.451869   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:21.452097   78166 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:10:14 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:10:21.452116   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:21.452264   78166 provision.go:143] copyHostCerts
	I0528 22:10:21.452336   78166 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem, removing ...
	I0528 22:10:21.452355   78166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem
	I0528 22:10:21.452422   78166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem (1078 bytes)
	I0528 22:10:21.452506   78166 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem, removing ...
	I0528 22:10:21.452514   78166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem
	I0528 22:10:21.452538   78166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem (1123 bytes)
	I0528 22:10:21.452586   78166 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem, removing ...
	I0528 22:10:21.452593   78166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem
	I0528 22:10:21.452612   78166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem (1675 bytes)
	I0528 22:10:21.452660   78166 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem org=jenkins.newest-cni-588598 san=[127.0.0.1 192.168.39.57 localhost minikube newest-cni-588598]
	I0528 22:10:21.689350   78166 provision.go:177] copyRemoteCerts
	I0528 22:10:21.689399   78166 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 22:10:21.689425   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:10:21.692062   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:21.692596   78166 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:10:14 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:10:21.692627   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:21.692877   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHPort
	I0528 22:10:21.693071   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:10:21.693226   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHUsername
	I0528 22:10:21.693398   78166 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598/id_rsa Username:docker}
	I0528 22:10:21.776437   78166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0528 22:10:21.804184   78166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0528 22:10:21.831299   78166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0528 22:10:21.858000   78166 provision.go:87] duration metric: took 411.350402ms to configureAuth
	I0528 22:10:21.858022   78166 buildroot.go:189] setting minikube options for container-runtime
	I0528 22:10:21.858216   78166 config.go:182] Loaded profile config "newest-cni-588598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 22:10:21.858310   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:10:21.860992   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:21.861399   78166 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:10:14 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:10:21.861418   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:21.861716   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHPort
	I0528 22:10:21.861930   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:10:21.862076   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:10:21.862194   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHUsername
	I0528 22:10:21.862377   78166 main.go:141] libmachine: Using SSH client type: native
	I0528 22:10:21.862595   78166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0528 22:10:21.862617   78166 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0528 22:10:22.133213   78166 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0528 22:10:22.133239   78166 machine.go:97] duration metric: took 1.027552944s to provisionDockerMachine
	I0528 22:10:22.133249   78166 start.go:293] postStartSetup for "newest-cni-588598" (driver="kvm2")
	I0528 22:10:22.133273   78166 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0528 22:10:22.133288   78166 main.go:141] libmachine: (newest-cni-588598) Calling .DriverName
	I0528 22:10:22.133619   78166 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0528 22:10:22.133666   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:10:22.136533   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:22.136905   78166 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:10:14 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:10:22.136943   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:22.137186   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHPort
	I0528 22:10:22.137415   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:10:22.137603   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHUsername
	I0528 22:10:22.137743   78166 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598/id_rsa Username:docker}
	I0528 22:10:22.216515   78166 ssh_runner.go:195] Run: cat /etc/os-release
	I0528 22:10:22.220904   78166 info.go:137] Remote host: Buildroot 2023.02.9
	I0528 22:10:22.220939   78166 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/addons for local assets ...
	I0528 22:10:22.221009   78166 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/files for local assets ...
	I0528 22:10:22.221098   78166 filesync.go:149] local asset: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem -> 117602.pem in /etc/ssl/certs
	I0528 22:10:22.221207   78166 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0528 22:10:22.230605   78166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem --> /etc/ssl/certs/117602.pem (1708 bytes)
	I0528 22:10:22.256224   78166 start.go:296] duration metric: took 122.964127ms for postStartSetup
	I0528 22:10:22.256262   78166 fix.go:56] duration metric: took 18.666995938s for fixHost
	I0528 22:10:22.256300   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:10:22.259322   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:22.259694   78166 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:10:14 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:10:22.259724   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:22.259884   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHPort
	I0528 22:10:22.260085   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:10:22.260257   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:10:22.260408   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHUsername
	I0528 22:10:22.260601   78166 main.go:141] libmachine: Using SSH client type: native
	I0528 22:10:22.260758   78166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0528 22:10:22.260767   78166 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0528 22:10:22.366264   78166 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716934222.338737256
	
	I0528 22:10:22.366281   78166 fix.go:216] guest clock: 1716934222.338737256
	I0528 22:10:22.366287   78166 fix.go:229] Guest: 2024-05-28 22:10:22.338737256 +0000 UTC Remote: 2024-05-28 22:10:22.256266989 +0000 UTC m=+18.801025807 (delta=82.470267ms)
	I0528 22:10:22.366329   78166 fix.go:200] guest clock delta is within tolerance: 82.470267ms
	I0528 22:10:22.366336   78166 start.go:83] releasing machines lock for "newest-cni-588598", held for 18.777087397s
	I0528 22:10:22.366355   78166 main.go:141] libmachine: (newest-cni-588598) Calling .DriverName
	I0528 22:10:22.366636   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetIP
	I0528 22:10:22.369373   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:22.369680   78166 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:10:14 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:10:22.369707   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:22.369827   78166 main.go:141] libmachine: (newest-cni-588598) Calling .DriverName
	I0528 22:10:22.370296   78166 main.go:141] libmachine: (newest-cni-588598) Calling .DriverName
	I0528 22:10:22.370462   78166 main.go:141] libmachine: (newest-cni-588598) Calling .DriverName
	I0528 22:10:22.370573   78166 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0528 22:10:22.370619   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:10:22.370718   78166 ssh_runner.go:195] Run: cat /version.json
	I0528 22:10:22.370743   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:10:22.373212   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:22.373518   78166 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:10:14 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:10:22.373544   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:22.373576   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:22.373715   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHPort
	I0528 22:10:22.373896   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:10:22.374075   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHUsername
	I0528 22:10:22.374076   78166 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:10:14 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:10:22.374124   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:22.374196   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHPort
	I0528 22:10:22.374331   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:10:22.374367   78166 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598/id_rsa Username:docker}
	I0528 22:10:22.374466   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHUsername
	I0528 22:10:22.374594   78166 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598/id_rsa Username:docker}
	I0528 22:10:22.472175   78166 ssh_runner.go:195] Run: systemctl --version
	I0528 22:10:22.478366   78166 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0528 22:10:22.629498   78166 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0528 22:10:22.636015   78166 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0528 22:10:22.636090   78166 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 22:10:22.652653   78166 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0528 22:10:22.652672   78166 start.go:494] detecting cgroup driver to use...
	I0528 22:10:22.652718   78166 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0528 22:10:22.671583   78166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 22:10:22.687134   78166 docker.go:217] disabling cri-docker service (if available) ...
	I0528 22:10:22.687216   78166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0528 22:10:22.701618   78166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0528 22:10:22.714931   78166 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0528 22:10:22.829917   78166 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0528 22:10:22.984305   78166 docker.go:233] disabling docker service ...
	I0528 22:10:22.984408   78166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0528 22:10:22.998601   78166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0528 22:10:23.011502   78166 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0528 22:10:23.146935   78166 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0528 22:10:23.254677   78166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0528 22:10:23.268481   78166 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 22:10:23.286930   78166 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0528 22:10:23.287000   78166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 22:10:23.296967   78166 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0528 22:10:23.297023   78166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 22:10:23.307277   78166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 22:10:23.317449   78166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 22:10:23.327620   78166 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0528 22:10:23.337927   78166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 22:10:23.347809   78166 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 22:10:23.364698   78166 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 22:10:23.374698   78166 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0528 22:10:23.384139   78166 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0528 22:10:23.384199   78166 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0528 22:10:23.397676   78166 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0528 22:10:23.407326   78166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 22:10:23.525666   78166 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0528 22:10:23.666020   78166 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0528 22:10:23.666086   78166 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0528 22:10:23.671601   78166 start.go:562] Will wait 60s for crictl version
	I0528 22:10:23.671681   78166 ssh_runner.go:195] Run: which crictl
	I0528 22:10:23.675592   78166 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0528 22:10:23.720429   78166 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0528 22:10:23.720561   78166 ssh_runner.go:195] Run: crio --version
	I0528 22:10:23.747317   78166 ssh_runner.go:195] Run: crio --version
	I0528 22:10:23.775385   78166 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0528 22:10:23.776563   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetIP
	I0528 22:10:23.779052   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:23.779295   78166 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:10:14 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:10:23.779330   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:23.779539   78166 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0528 22:10:23.783666   78166 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 22:10:23.797649   78166 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0528 22:10:23.798769   78166 kubeadm.go:877] updating cluster {Name:newest-cni-588598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-588598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0528 22:10:23.798876   78166 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 22:10:23.798924   78166 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 22:10:23.833487   78166 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0528 22:10:23.833568   78166 ssh_runner.go:195] Run: which lz4
	I0528 22:10:23.837384   78166 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0528 22:10:23.841397   78166 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0528 22:10:23.841426   78166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0528 22:10:25.245870   78166 crio.go:462] duration metric: took 1.408510459s to copy over tarball
	I0528 22:10:25.245951   78166 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0528 22:10:27.437359   78166 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.191373201s)
	I0528 22:10:27.437397   78166 crio.go:469] duration metric: took 2.191502921s to extract the tarball
	I0528 22:10:27.437406   78166 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0528 22:10:27.477666   78166 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 22:10:27.519220   78166 crio.go:514] all images are preloaded for cri-o runtime.
	I0528 22:10:27.519242   78166 cache_images.go:84] Images are preloaded, skipping loading
	I0528 22:10:27.519250   78166 kubeadm.go:928] updating node { 192.168.39.57 8443 v1.30.1 crio true true} ...
	I0528 22:10:27.519374   78166 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-588598 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.57
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:newest-cni-588598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0528 22:10:27.519455   78166 ssh_runner.go:195] Run: crio config
	I0528 22:10:27.568276   78166 cni.go:84] Creating CNI manager for ""
	I0528 22:10:27.568299   78166 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 22:10:27.568314   78166 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0528 22:10:27.568333   78166 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.57 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-588598 NodeName:newest-cni-588598 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.57"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.57 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0528 22:10:27.568470   78166 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.57
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-588598"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.57
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.57"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0528 22:10:27.568540   78166 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0528 22:10:27.578929   78166 binaries.go:44] Found k8s binaries, skipping transfer
	I0528 22:10:27.578985   78166 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0528 22:10:27.589501   78166 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (353 bytes)
	I0528 22:10:27.608053   78166 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0528 22:10:27.624686   78166 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2282 bytes)
	I0528 22:10:27.642539   78166 ssh_runner.go:195] Run: grep 192.168.39.57	control-plane.minikube.internal$ /etc/hosts
	I0528 22:10:27.646439   78166 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.57	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 22:10:27.659110   78166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 22:10:27.793923   78166 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 22:10:27.812426   78166 certs.go:68] Setting up /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598 for IP: 192.168.39.57
	I0528 22:10:27.812454   78166 certs.go:194] generating shared ca certs ...
	I0528 22:10:27.812477   78166 certs.go:226] acquiring lock for ca certs: {Name:mkf081a2e878fd451cfbdf01dc28a5f7a6a3abc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 22:10:27.812668   78166 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key
	I0528 22:10:27.812731   78166 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key
	I0528 22:10:27.812744   78166 certs.go:256] generating profile certs ...
	I0528 22:10:27.812872   78166 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598/client.key
	I0528 22:10:27.812971   78166 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598/apiserver.key.3d9132ba
	I0528 22:10:27.813030   78166 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598/proxy-client.key
	I0528 22:10:27.813195   78166 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem (1338 bytes)
	W0528 22:10:27.813245   78166 certs.go:480] ignoring /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760_empty.pem, impossibly tiny 0 bytes
	I0528 22:10:27.813263   78166 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem (1675 bytes)
	I0528 22:10:27.813295   78166 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem (1078 bytes)
	I0528 22:10:27.813325   78166 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem (1123 bytes)
	I0528 22:10:27.813354   78166 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem (1675 bytes)
	I0528 22:10:27.813424   78166 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem (1708 bytes)
	I0528 22:10:27.814983   78166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0528 22:10:27.844995   78166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0528 22:10:27.883085   78166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0528 22:10:27.920052   78166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0528 22:10:27.948786   78166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0528 22:10:27.975806   78166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0528 22:10:28.005583   78166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0528 22:10:28.030585   78166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0528 22:10:28.056770   78166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem --> /usr/share/ca-certificates/11760.pem (1338 bytes)
	I0528 22:10:28.082575   78166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem --> /usr/share/ca-certificates/117602.pem (1708 bytes)
	I0528 22:10:28.107581   78166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0528 22:10:28.132689   78166 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0528 22:10:28.150296   78166 ssh_runner.go:195] Run: openssl version
	I0528 22:10:28.156546   78166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11760.pem && ln -fs /usr/share/ca-certificates/11760.pem /etc/ssl/certs/11760.pem"
	I0528 22:10:28.167235   78166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11760.pem
	I0528 22:10:28.171747   78166 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 28 20:34 /usr/share/ca-certificates/11760.pem
	I0528 22:10:28.171795   78166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11760.pem
	I0528 22:10:28.177719   78166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11760.pem /etc/ssl/certs/51391683.0"
	I0528 22:10:28.188095   78166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/117602.pem && ln -fs /usr/share/ca-certificates/117602.pem /etc/ssl/certs/117602.pem"
	I0528 22:10:28.198282   78166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117602.pem
	I0528 22:10:28.202886   78166 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 28 20:34 /usr/share/ca-certificates/117602.pem
	I0528 22:10:28.202935   78166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117602.pem
	I0528 22:10:28.208624   78166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/117602.pem /etc/ssl/certs/3ec20f2e.0"
	I0528 22:10:28.218860   78166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0528 22:10:28.229289   78166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0528 22:10:28.233855   78166 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 28 20:22 /usr/share/ca-certificates/minikubeCA.pem
	I0528 22:10:28.233908   78166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0528 22:10:28.239707   78166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
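
Each certificate above is made trusted by computing its OpenSSL subject hash and symlinking <hash>.0 in /etc/ssl/certs back to the PEM file, which is what the openssl x509 -hash -noout plus ln -fs pairs in the log are doing. A rough Go equivalent that shells out to openssl is sketched here; the paths are illustrative and writing into the trust directory needs root.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCAByHash asks openssl for the certificate's subject hash and then
// symlinks <hash>.0 in trustDir to the PEM file, mirroring the rehash step.
func linkCAByHash(pemPath, trustDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(trustDir, hash+".0")
	_ = os.Remove(link) // mirror ln -fs: replace an existing link if present
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCAByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
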
	I0528 22:10:28.250693   78166 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0528 22:10:28.255585   78166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0528 22:10:28.262175   78166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0528 22:10:28.268550   78166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0528 22:10:28.275531   78166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0528 22:10:28.282766   78166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0528 22:10:28.289007   78166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
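
The openssl x509 -checkend 86400 runs above verify that each control-plane certificate remains valid for at least another 24 hours before the cluster is restarted. The same check can be done in-process with crypto/x509; the sketch below is an illustration (the certificate path is just one of the files named above), not the test's own code.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validForAtLeast mirrors `openssl x509 -checkend N`: it reports whether the
// first certificate in the PEM file stays valid for at least duration d.
func validForAtLeast(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validForAtLeast("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("valid for 24h:", ok)
}
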
	I0528 22:10:28.295343   78166 kubeadm.go:391] StartCluster: {Name:newest-cni-588598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-588598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 22:10:28.295482   78166 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0528 22:10:28.295536   78166 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0528 22:10:28.338152   78166 cri.go:89] found id: ""
	I0528 22:10:28.338229   78166 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0528 22:10:28.349119   78166 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0528 22:10:28.349140   78166 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0528 22:10:28.349144   78166 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0528 22:10:28.349187   78166 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0528 22:10:28.359484   78166 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0528 22:10:28.360054   78166 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-588598" does not appear in /home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 22:10:28.360288   78166 kubeconfig.go:62] /home/jenkins/minikube-integration/18966-3963/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-588598" cluster setting kubeconfig missing "newest-cni-588598" context setting]
	I0528 22:10:28.360702   78166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/kubeconfig: {Name:mkb9ab62df00efc21274382bbe7156f03baabac9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 22:10:28.361961   78166 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0528 22:10:28.371797   78166 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.57
	I0528 22:10:28.371823   78166 kubeadm.go:1154] stopping kube-system containers ...
	I0528 22:10:28.371832   78166 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0528 22:10:28.371876   78166 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0528 22:10:28.412240   78166 cri.go:89] found id: ""
	I0528 22:10:28.412312   78166 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0528 22:10:28.429416   78166 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0528 22:10:28.439549   78166 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0528 22:10:28.439577   78166 kubeadm.go:156] found existing configuration files:
	
	I0528 22:10:28.439625   78166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0528 22:10:28.448717   78166 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0528 22:10:28.448776   78166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0528 22:10:28.458405   78166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0528 22:10:28.467602   78166 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0528 22:10:28.467665   78166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0528 22:10:28.477711   78166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0528 22:10:28.486931   78166 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0528 22:10:28.487016   78166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0528 22:10:28.497193   78166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0528 22:10:28.506637   78166 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0528 22:10:28.506693   78166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0528 22:10:28.516299   78166 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0528 22:10:28.526159   78166 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 22:10:28.644338   78166 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 22:10:29.980562   78166 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.336189588s)
	I0528 22:10:29.980590   78166 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0528 22:10:30.192135   78166 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 22:10:30.264552   78166 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
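
During restartPrimaryControlPlane the test replays individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the shared /var/tmp/minikube/kubeadm.yaml, as the runs above show. A compact sketch of driving those phases from Go follows; the binary name, config path, and minimal error handling are assumptions for illustration only.

package main

import (
	"fmt"
	"os/exec"
)

// runInitPhases replays the kubeadm init phases seen in the log against a
// shared config file, stopping at the first failure.
func runInitPhases(kubeadm, config string) error {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, p...)
		args = append(args, "--config", config)
		if out, err := exec.Command(kubeadm, args...).CombinedOutput(); err != nil {
			return fmt.Errorf("phase %v: %v\n%s", p, err, out)
		}
	}
	return nil
}

func main() {
	if err := runInitPhases("kubeadm", "/var/tmp/minikube/kubeadm.yaml"); err != nil {
		fmt.Println(err)
	}
}
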
	I0528 22:10:30.356685   78166 api_server.go:52] waiting for apiserver process to appear ...
	I0528 22:10:30.356780   78166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 22:10:30.857038   78166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 22:10:31.357524   78166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 22:10:31.422626   78166 api_server.go:72] duration metric: took 1.065942329s to wait for apiserver process to appear ...
	I0528 22:10:31.422654   78166 api_server.go:88] waiting for apiserver healthz status ...
	I0528 22:10:31.422676   78166 api_server.go:253] Checking apiserver healthz at https://192.168.39.57:8443/healthz ...
	I0528 22:10:33.761313   78166 api_server.go:279] https://192.168.39.57:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0528 22:10:33.761353   78166 api_server.go:103] status: https://192.168.39.57:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0528 22:10:33.761371   78166 api_server.go:253] Checking apiserver healthz at https://192.168.39.57:8443/healthz ...
	I0528 22:10:33.805328   78166 api_server.go:279] https://192.168.39.57:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 22:10:33.805366   78166 api_server.go:103] status: https://192.168.39.57:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 22:10:33.923552   78166 api_server.go:253] Checking apiserver healthz at https://192.168.39.57:8443/healthz ...
	I0528 22:10:33.933064   78166 api_server.go:279] https://192.168.39.57:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 22:10:33.933088   78166 api_server.go:103] status: https://192.168.39.57:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 22:10:34.423714   78166 api_server.go:253] Checking apiserver healthz at https://192.168.39.57:8443/healthz ...
	I0528 22:10:34.445971   78166 api_server.go:279] https://192.168.39.57:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 22:10:34.445997   78166 api_server.go:103] status: https://192.168.39.57:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 22:10:34.923401   78166 api_server.go:253] Checking apiserver healthz at https://192.168.39.57:8443/healthz ...
	I0528 22:10:34.938319   78166 api_server.go:279] https://192.168.39.57:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 22:10:34.938353   78166 api_server.go:103] status: https://192.168.39.57:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 22:10:35.422865   78166 api_server.go:253] Checking apiserver healthz at https://192.168.39.57:8443/healthz ...
	I0528 22:10:35.427013   78166 api_server.go:279] https://192.168.39.57:8443/healthz returned 200:
	ok
	I0528 22:10:35.434109   78166 api_server.go:141] control plane version: v1.30.1
	I0528 22:10:35.434131   78166 api_server.go:131] duration metric: took 4.011469454s to wait for apiserver health ...
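
The api_server.go wait above polls https://192.168.39.57:8443/healthz roughly every half second until it returns 200, tolerating the intermediate 403 and 500 responses while the post-start hooks finish. A self-contained sketch of such a wait loop is below; the URL, interval, timeout, and the decision to skip TLS verification (the client has no cluster CA here) are assumptions for illustration.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes, treating errors and non-200 codes as "not yet ready".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("healthz not ok within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.57:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
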
	I0528 22:10:35.434139   78166 cni.go:84] Creating CNI manager for ""
	I0528 22:10:35.434144   78166 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 22:10:35.436088   78166 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0528 22:10:35.437273   78166 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0528 22:10:35.456009   78166 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0528 22:10:35.487261   78166 system_pods.go:43] waiting for kube-system pods to appear ...
	I0528 22:10:35.497647   78166 system_pods.go:59] 8 kube-system pods found
	I0528 22:10:35.497693   78166 system_pods.go:61] "coredns-7db6d8ff4d-wk5f4" [9dcd7b17-fc19-4468-b8f9-76a2fb7f1ec9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0528 22:10:35.497706   78166 system_pods.go:61] "etcd-newest-cni-588598" [785dbf00-a5a6-4946-8a36-6200a875dbcc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0528 22:10:35.497721   78166 system_pods.go:61] "kube-apiserver-newest-cni-588598" [c9b79154-b6b7-494e-92b1-c447580db787] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0528 22:10:35.497731   78166 system_pods.go:61] "kube-controller-manager-newest-cni-588598" [f14bfaa9-0a88-4c01-9065-765797138f5d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0528 22:10:35.497741   78166 system_pods.go:61] "kube-proxy-8jgfw" [8125c94f-11df-4eee-8612-9546dc054146] Running
	I0528 22:10:35.497749   78166 system_pods.go:61] "kube-scheduler-newest-cni-588598" [3e3160b5-e111-4a5e-9082-c9ae2a6633c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0528 22:10:35.497772   78166 system_pods.go:61] "metrics-server-569cc877fc-zhskl" [af95aae0-a143-4c72-a193-3a097270666a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0528 22:10:35.497783   78166 system_pods.go:61] "storage-provisioner" [9993a26e-0e7d-45d6-ac6f-3672e3390ba5] Running
	I0528 22:10:35.497791   78166 system_pods.go:74] duration metric: took 10.504284ms to wait for pod list to return data ...
	I0528 22:10:35.497799   78166 node_conditions.go:102] verifying NodePressure condition ...
	I0528 22:10:35.500864   78166 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 22:10:35.500896   78166 node_conditions.go:123] node cpu capacity is 2
	I0528 22:10:35.500905   78166 node_conditions.go:105] duration metric: took 3.100481ms to run NodePressure ...
	I0528 22:10:35.500920   78166 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 22:10:35.765589   78166 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0528 22:10:35.777929   78166 ops.go:34] apiserver oom_adj: -16
	I0528 22:10:35.777955   78166 kubeadm.go:591] duration metric: took 7.428804577s to restartPrimaryControlPlane
	I0528 22:10:35.777967   78166 kubeadm.go:393] duration metric: took 7.48263173s to StartCluster
	I0528 22:10:35.777988   78166 settings.go:142] acquiring lock: {Name:mkc1182212d780ab38539839372c25eb989b2e70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 22:10:35.778104   78166 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 22:10:35.779254   78166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/kubeconfig: {Name:mkb9ab62df00efc21274382bbe7156f03baabac9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 22:10:35.779554   78166 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0528 22:10:35.781326   78166 out.go:177] * Verifying Kubernetes components...
	I0528 22:10:35.779655   78166 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0528 22:10:35.779731   78166 config.go:182] Loaded profile config "newest-cni-588598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 22:10:35.782715   78166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 22:10:35.782725   78166 addons.go:69] Setting default-storageclass=true in profile "newest-cni-588598"
	I0528 22:10:35.782730   78166 addons.go:69] Setting metrics-server=true in profile "newest-cni-588598"
	I0528 22:10:35.782753   78166 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-588598"
	I0528 22:10:35.782718   78166 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-588598"
	I0528 22:10:35.782822   78166 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-588598"
	W0528 22:10:35.782834   78166 addons.go:243] addon storage-provisioner should already be in state true
	I0528 22:10:35.782867   78166 host.go:66] Checking if "newest-cni-588598" exists ...
	I0528 22:10:35.782714   78166 addons.go:69] Setting dashboard=true in profile "newest-cni-588598"
	I0528 22:10:35.782966   78166 addons.go:234] Setting addon dashboard=true in "newest-cni-588598"
	W0528 22:10:35.782979   78166 addons.go:243] addon dashboard should already be in state true
	I0528 22:10:35.782755   78166 addons.go:234] Setting addon metrics-server=true in "newest-cni-588598"
	I0528 22:10:35.783012   78166 host.go:66] Checking if "newest-cni-588598" exists ...
	W0528 22:10:35.783027   78166 addons.go:243] addon metrics-server should already be in state true
	I0528 22:10:35.783066   78166 host.go:66] Checking if "newest-cni-588598" exists ...
	I0528 22:10:35.783180   78166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 22:10:35.783225   78166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 22:10:35.783249   78166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 22:10:35.783274   78166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 22:10:35.783375   78166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 22:10:35.783401   78166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 22:10:35.783500   78166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 22:10:35.783543   78166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 22:10:35.800652   78166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44733
	I0528 22:10:35.801119   78166 main.go:141] libmachine: () Calling .GetVersion
	I0528 22:10:35.801775   78166 main.go:141] libmachine: Using API Version  1
	I0528 22:10:35.801806   78166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 22:10:35.802183   78166 main.go:141] libmachine: () Calling .GetMachineName
	I0528 22:10:35.802780   78166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 22:10:35.802829   78166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 22:10:35.802898   78166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38249
	I0528 22:10:35.803058   78166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43811
	I0528 22:10:35.803403   78166 main.go:141] libmachine: () Calling .GetVersion
	I0528 22:10:35.803481   78166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44241
	I0528 22:10:35.803502   78166 main.go:141] libmachine: () Calling .GetVersion
	I0528 22:10:35.803811   78166 main.go:141] libmachine: () Calling .GetVersion
	I0528 22:10:35.803900   78166 main.go:141] libmachine: Using API Version  1
	I0528 22:10:35.803925   78166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 22:10:35.804190   78166 main.go:141] libmachine: Using API Version  1
	I0528 22:10:35.804208   78166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 22:10:35.804338   78166 main.go:141] libmachine: Using API Version  1
	I0528 22:10:35.804354   78166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 22:10:35.804415   78166 main.go:141] libmachine: () Calling .GetMachineName
	I0528 22:10:35.804530   78166 main.go:141] libmachine: () Calling .GetMachineName
	I0528 22:10:35.804663   78166 main.go:141] libmachine: () Calling .GetMachineName
	I0528 22:10:35.804717   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetState
	I0528 22:10:35.804954   78166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 22:10:35.804991   78166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 22:10:35.805160   78166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 22:10:35.805183   78166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 22:10:35.807367   78166 addons.go:234] Setting addon default-storageclass=true in "newest-cni-588598"
	W0528 22:10:35.807385   78166 addons.go:243] addon default-storageclass should already be in state true
	I0528 22:10:35.807412   78166 host.go:66] Checking if "newest-cni-588598" exists ...
	I0528 22:10:35.807632   78166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 22:10:35.807658   78166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 22:10:35.823082   78166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37265
	I0528 22:10:35.823600   78166 main.go:141] libmachine: () Calling .GetVersion
	I0528 22:10:35.824034   78166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40657
	I0528 22:10:35.824163   78166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36739
	I0528 22:10:35.824444   78166 main.go:141] libmachine: Using API Version  1
	I0528 22:10:35.824457   78166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 22:10:35.824520   78166 main.go:141] libmachine: () Calling .GetVersion
	I0528 22:10:35.824583   78166 main.go:141] libmachine: () Calling .GetVersion
	I0528 22:10:35.824980   78166 main.go:141] libmachine: Using API Version  1
	I0528 22:10:35.825000   78166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 22:10:35.825044   78166 main.go:141] libmachine: () Calling .GetMachineName
	I0528 22:10:35.825154   78166 main.go:141] libmachine: Using API Version  1
	I0528 22:10:35.825169   78166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 22:10:35.825522   78166 main.go:141] libmachine: () Calling .GetMachineName
	I0528 22:10:35.825696   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetState
	I0528 22:10:35.825826   78166 main.go:141] libmachine: () Calling .GetMachineName
	I0528 22:10:35.825995   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetState
	I0528 22:10:35.827698   78166 main.go:141] libmachine: (newest-cni-588598) Calling .DriverName
	I0528 22:10:35.827890   78166 main.go:141] libmachine: (newest-cni-588598) Calling .DriverName
	I0528 22:10:35.827969   78166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43809
	I0528 22:10:35.827998   78166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 22:10:35.828023   78166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 22:10:35.829890   78166 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0528 22:10:35.828537   78166 main.go:141] libmachine: () Calling .GetVersion
	I0528 22:10:35.831362   78166 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0528 22:10:35.831367   78166 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0528 22:10:35.831384   78166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0528 22:10:35.831407   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:10:35.832763   78166 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0528 22:10:35.831943   78166 main.go:141] libmachine: Using API Version  1
	I0528 22:10:35.832782   78166 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0528 22:10:35.832802   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:10:35.832819   78166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 22:10:35.833207   78166 main.go:141] libmachine: () Calling .GetMachineName
	I0528 22:10:35.833359   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetState
	I0528 22:10:35.835351   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:35.835892   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHPort
	I0528 22:10:35.835931   78166 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:10:14 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:10:35.835970   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:35.836130   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:10:35.836291   78166 main.go:141] libmachine: (newest-cni-588598) Calling .DriverName
	I0528 22:10:35.836354   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:35.836379   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHUsername
	I0528 22:10:35.838121   78166 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0528 22:10:35.836741   78166 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598/id_rsa Username:docker}
	I0528 22:10:35.836763   78166 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:10:14 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:10:35.837036   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHPort
	I0528 22:10:35.839385   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:35.840604   78166 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0528 22:10:35.839618   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:10:35.841806   78166 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0528 22:10:35.841824   78166 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0528 22:10:35.841840   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:10:35.841995   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHUsername
	I0528 22:10:35.842166   78166 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598/id_rsa Username:docker}
	I0528 22:10:35.844621   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHPort
	I0528 22:10:35.844645   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:35.844671   78166 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:10:14 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:10:35.844694   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:35.844783   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:10:35.844947   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHUsername
	I0528 22:10:35.845102   78166 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598/id_rsa Username:docker}
	I0528 22:10:35.847124   78166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39001
	I0528 22:10:35.847461   78166 main.go:141] libmachine: () Calling .GetVersion
	I0528 22:10:35.847996   78166 main.go:141] libmachine: Using API Version  1
	I0528 22:10:35.848028   78166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 22:10:35.848412   78166 main.go:141] libmachine: () Calling .GetMachineName
	I0528 22:10:35.848594   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetState
	I0528 22:10:35.849784   78166 main.go:141] libmachine: (newest-cni-588598) Calling .DriverName
	I0528 22:10:35.850006   78166 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0528 22:10:35.850020   78166 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0528 22:10:35.850036   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:10:35.852949   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:35.853321   78166 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:10:14 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:10:35.853343   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:35.853564   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHPort
	I0528 22:10:35.853728   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:10:35.853961   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHUsername
	I0528 22:10:35.854070   78166 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598/id_rsa Username:docker}
	I0528 22:10:35.986115   78166 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 22:10:36.006920   78166 api_server.go:52] waiting for apiserver process to appear ...
	I0528 22:10:36.007010   78166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 22:10:36.022543   78166 api_server.go:72] duration metric: took 242.943759ms to wait for apiserver process to appear ...
	I0528 22:10:36.022568   78166 api_server.go:88] waiting for apiserver healthz status ...
	I0528 22:10:36.022584   78166 api_server.go:253] Checking apiserver healthz at https://192.168.39.57:8443/healthz ...
	I0528 22:10:36.028270   78166 api_server.go:279] https://192.168.39.57:8443/healthz returned 200:
	ok
	I0528 22:10:36.029416   78166 api_server.go:141] control plane version: v1.30.1
	I0528 22:10:36.029437   78166 api_server.go:131] duration metric: took 6.863133ms to wait for apiserver health ...
	I0528 22:10:36.029444   78166 system_pods.go:43] waiting for kube-system pods to appear ...
	I0528 22:10:36.034807   78166 system_pods.go:59] 8 kube-system pods found
	I0528 22:10:36.034833   78166 system_pods.go:61] "coredns-7db6d8ff4d-wk5f4" [9dcd7b17-fc19-4468-b8f9-76a2fb7f1ec9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0528 22:10:36.034842   78166 system_pods.go:61] "etcd-newest-cni-588598" [785dbf00-a5a6-4946-8a36-6200a875dbcc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0528 22:10:36.034851   78166 system_pods.go:61] "kube-apiserver-newest-cni-588598" [c9b79154-b6b7-494e-92b1-c447580db787] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0528 22:10:36.034857   78166 system_pods.go:61] "kube-controller-manager-newest-cni-588598" [f14bfaa9-0a88-4c01-9065-765797138f5d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0528 22:10:36.034862   78166 system_pods.go:61] "kube-proxy-8jgfw" [8125c94f-11df-4eee-8612-9546dc054146] Running
	I0528 22:10:36.034867   78166 system_pods.go:61] "kube-scheduler-newest-cni-588598" [3e3160b5-e111-4a5e-9082-c9ae2a6633c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0528 22:10:36.034873   78166 system_pods.go:61] "metrics-server-569cc877fc-zhskl" [af95aae0-a143-4c72-a193-3a097270666a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0528 22:10:36.034877   78166 system_pods.go:61] "storage-provisioner" [9993a26e-0e7d-45d6-ac6f-3672e3390ba5] Running
	I0528 22:10:36.034882   78166 system_pods.go:74] duration metric: took 5.433272ms to wait for pod list to return data ...
	I0528 22:10:36.034891   78166 default_sa.go:34] waiting for default service account to be created ...
	I0528 22:10:36.037186   78166 default_sa.go:45] found service account: "default"
	I0528 22:10:36.037208   78166 default_sa.go:55] duration metric: took 2.311977ms for default service account to be created ...
	I0528 22:10:36.037217   78166 kubeadm.go:576] duration metric: took 257.62574ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0528 22:10:36.037231   78166 node_conditions.go:102] verifying NodePressure condition ...
	I0528 22:10:36.039286   78166 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 22:10:36.039302   78166 node_conditions.go:123] node cpu capacity is 2
	I0528 22:10:36.039309   78166 node_conditions.go:105] duration metric: took 2.074024ms to run NodePressure ...
	I0528 22:10:36.039319   78166 start.go:240] waiting for startup goroutines ...
	I0528 22:10:36.064588   78166 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0528 22:10:36.064618   78166 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0528 22:10:36.071458   78166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0528 22:10:36.091840   78166 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0528 22:10:36.091875   78166 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0528 22:10:36.122954   78166 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0528 22:10:36.122986   78166 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0528 22:10:36.151455   78166 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0528 22:10:36.151475   78166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0528 22:10:36.167610   78166 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0528 22:10:36.167628   78166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0528 22:10:36.183705   78166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0528 22:10:36.198394   78166 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0528 22:10:36.198426   78166 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0528 22:10:36.212483   78166 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0528 22:10:36.212502   78166 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0528 22:10:36.249558   78166 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0528 22:10:36.249593   78166 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0528 22:10:36.251340   78166 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0528 22:10:36.251359   78166 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0528 22:10:36.302541   78166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0528 22:10:36.316104   78166 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0528 22:10:36.316128   78166 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0528 22:10:36.343632   78166 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0528 22:10:36.343657   78166 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0528 22:10:36.368360   78166 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0528 22:10:36.368383   78166 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0528 22:10:36.430667   78166 main.go:141] libmachine: Making call to close driver server
	I0528 22:10:36.430705   78166 main.go:141] libmachine: (newest-cni-588598) Calling .Close
	I0528 22:10:36.431035   78166 main.go:141] libmachine: (newest-cni-588598) DBG | Closing plugin on server side
	I0528 22:10:36.431095   78166 main.go:141] libmachine: Successfully made call to close driver server
	I0528 22:10:36.431113   78166 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 22:10:36.431126   78166 main.go:141] libmachine: Making call to close driver server
	I0528 22:10:36.431140   78166 main.go:141] libmachine: (newest-cni-588598) Calling .Close
	I0528 22:10:36.431470   78166 main.go:141] libmachine: (newest-cni-588598) DBG | Closing plugin on server side
	I0528 22:10:36.431495   78166 main.go:141] libmachine: Successfully made call to close driver server
	I0528 22:10:36.431507   78166 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 22:10:36.437934   78166 main.go:141] libmachine: Making call to close driver server
	I0528 22:10:36.437955   78166 main.go:141] libmachine: (newest-cni-588598) Calling .Close
	I0528 22:10:36.438185   78166 main.go:141] libmachine: Successfully made call to close driver server
	I0528 22:10:36.438221   78166 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 22:10:36.492982   78166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0528 22:10:37.677004   78166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.493257075s)
	I0528 22:10:37.677057   78166 main.go:141] libmachine: Making call to close driver server
	I0528 22:10:37.677069   78166 main.go:141] libmachine: (newest-cni-588598) Calling .Close
	I0528 22:10:37.677356   78166 main.go:141] libmachine: Successfully made call to close driver server
	I0528 22:10:37.677376   78166 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 22:10:37.677392   78166 main.go:141] libmachine: Making call to close driver server
	I0528 22:10:37.677482   78166 main.go:141] libmachine: (newest-cni-588598) Calling .Close
	I0528 22:10:37.677700   78166 main.go:141] libmachine: Successfully made call to close driver server
	I0528 22:10:37.677723   78166 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 22:10:37.782133   78166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.479541726s)
	I0528 22:10:37.782201   78166 main.go:141] libmachine: Making call to close driver server
	I0528 22:10:37.782217   78166 main.go:141] libmachine: (newest-cni-588598) Calling .Close
	I0528 22:10:37.782560   78166 main.go:141] libmachine: (newest-cni-588598) DBG | Closing plugin on server side
	I0528 22:10:37.782567   78166 main.go:141] libmachine: Successfully made call to close driver server
	I0528 22:10:37.782580   78166 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 22:10:37.782590   78166 main.go:141] libmachine: Making call to close driver server
	I0528 22:10:37.782598   78166 main.go:141] libmachine: (newest-cni-588598) Calling .Close
	I0528 22:10:37.782881   78166 main.go:141] libmachine: (newest-cni-588598) DBG | Closing plugin on server side
	I0528 22:10:37.782895   78166 main.go:141] libmachine: Successfully made call to close driver server
	I0528 22:10:37.782906   78166 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 22:10:37.782919   78166 addons.go:475] Verifying addon metrics-server=true in "newest-cni-588598"
	I0528 22:10:37.847114   78166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.354088519s)
	I0528 22:10:37.847172   78166 main.go:141] libmachine: Making call to close driver server
	I0528 22:10:37.847186   78166 main.go:141] libmachine: (newest-cni-588598) Calling .Close
	I0528 22:10:37.847485   78166 main.go:141] libmachine: Successfully made call to close driver server
	I0528 22:10:37.847503   78166 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 22:10:37.847513   78166 main.go:141] libmachine: Making call to close driver server
	I0528 22:10:37.847521   78166 main.go:141] libmachine: (newest-cni-588598) Calling .Close
	I0528 22:10:37.847758   78166 main.go:141] libmachine: Successfully made call to close driver server
	I0528 22:10:37.847774   78166 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 22:10:37.849515   78166 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-588598 addons enable metrics-server
	
	I0528 22:10:37.850980   78166 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0528 22:10:37.852515   78166 addons.go:510] duration metric: took 2.072864323s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0528 22:10:37.852553   78166 start.go:245] waiting for cluster config update ...
	I0528 22:10:37.852568   78166 start.go:254] writing updated cluster config ...
	I0528 22:10:37.852808   78166 ssh_runner.go:195] Run: rm -f paused
	I0528 22:10:37.900425   78166 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0528 22:10:37.902424   78166 out.go:177] * Done! kubectl is now configured to use "newest-cni-588598" cluster and "default" namespace by default
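Not part of the captured log: a minimal sketch of how the addon state reported above could be re-checked by hand against the same profile, assuming the standard minikube layout (dashboard objects in the kubernetes-dashboard namespace created by dashboard-ns.yaml, metrics-server in kube-system; the k8s-app=metrics-server label is an assumption):

  # show which addons minikube believes are enabled for this profile
  minikube -p newest-cni-588598 addons list
  # dashboard workloads live in the kubernetes-dashboard namespace
  kubectl --context newest-cni-588598 get pods -n kubernetes-dashboard
  # metrics-server is deployed into kube-system
  kubectl --context newest-cni-588598 get pods -n kube-system -l k8s-app=metrics-server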
	
	
	==> CRI-O <==
	May 28 22:13:01 default-k8s-diff-port-249165 crio[730]: time="2024-05-28 22:13:01.405616650Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716934381405596498,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b3b5b5e5-4b90-4916-8f4c-4dbf42416005 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:13:01 default-k8s-diff-port-249165 crio[730]: time="2024-05-28 22:13:01.406122652Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a509c9f7-ec7b-42a5-993a-105c203e322e name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:13:01 default-k8s-diff-port-249165 crio[730]: time="2024-05-28 22:13:01.406187253Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a509c9f7-ec7b-42a5-993a-105c203e322e name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:13:01 default-k8s-diff-port-249165 crio[730]: time="2024-05-28 22:13:01.406364206Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1c548f7544cbb6646beaf3e60e066fd007e36751aa6d64dc3525d2e7c313115c,PodSandboxId:2648aa7b5be82109ec33dc22d721afb5182f4314fd51e2de905ec4553b75fbdb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716933839153737016,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9v4qf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 970de16b-4ade-4d82-8f78-fc83fc86fc8a,},Annotations:map[string]string{io.kubernetes.container.hash: 3e26d238,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0faefa4b1c4c329709900d137e560de3390c8153d2f934dcafed855199f336f0,PodSandboxId:8ebf0bd9db29cba925c9024a33413319840d4fa4c917e999210ed3cced56e604,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716933839100165633,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-m7n7k,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: caf303ad-139a-4b42-820e-617fa654399c,},Annotations:map[string]string{io.kubernetes.container.hash: ea30a637,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fec711aaa85924e7057421993cf75a2bf3b977afbbed007f940052128eb02e89,PodSandboxId:93dc7b05268240c895dc9b9c7de85b9349208e60b66b8292d6cf49c06966da6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1716933838493672775,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be3fd3ac-6795-4168-bd94-007932dcbb2c,},Annotations:map[string]string{io.kubernetes.container.hash: 14e90e58,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8bab6489dd8e95e261046821e03129e1df20511a957172d131a801383b2782c,PodSandboxId:994ab08e76d0a16f9f656192c7305743082ae5274d38c64ceee31d1490c0ae70,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING
,CreatedAt:1716933837933311404,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b2nd9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df64df09-8898-44db-919c-0b1d564538ee,},Annotations:map[string]string{io.kubernetes.container.hash: fb208a3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4719768083408e8d40a1b3e1767d0c299d50fc88c3d66aac9b6fd5d458ee7ac,PodSandboxId:de74adb6bb2e42045eddd14aab0a6da13119970fcec0e361690ae712e702f5f2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716933818509286762,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-249165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39715514d16e1aef2c805f45c43e942c,},Annotations:map[string]string{io.kubernetes.container.hash: 55d06a1a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30fed8617be7400eab4846e41b293ade0c1ada903f8c3e21fdcd5052ca35b41d,PodSandboxId:5819f408569516337af99087fe96a2a11a1dec54cb0fccef7a2ecc34c8394c34,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716933818541679246,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-249165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb34c519fc34f94122ba139e98e7226a,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7456087993ce43df9e33b9136199c345eb75a44dd4be3c9eae3d8378076c3cfb,PodSandboxId:eaf3652a3a78ac206674ff795df24a67155bcb3220adf5b257f77b1588fd29dc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716933818451356924,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-249165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15b574d3676ce396f415ec6bdfd52e3c,},Annotations:map[string]string{io.kubernetes.container.hash: c6aa01a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa0d2ea508b9e928e0a175ee08f28acb732359d0cb411c0baaa7c36b7a9913bb,PodSandboxId:7c43d8161def62f299845abf9bc11d8c831b1ecb31982eb8a4dd37d9caeec00a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716933818414528223,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-249165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832b6925a1cfa430048d5fd4482f4cbc,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a509c9f7-ec7b-42a5-993a-105c203e322e name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:13:01 default-k8s-diff-port-249165 crio[730]: time="2024-05-28 22:13:01.443775485Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=027de5b0-035c-4334-9ff6-f731a7824c50 name=/runtime.v1.RuntimeService/Version
	May 28 22:13:01 default-k8s-diff-port-249165 crio[730]: time="2024-05-28 22:13:01.443865747Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=027de5b0-035c-4334-9ff6-f731a7824c50 name=/runtime.v1.RuntimeService/Version
	May 28 22:13:01 default-k8s-diff-port-249165 crio[730]: time="2024-05-28 22:13:01.445429473Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2fdceda5-db3f-419b-af4f-ff127eb91924 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:13:01 default-k8s-diff-port-249165 crio[730]: time="2024-05-28 22:13:01.445852728Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716934381445831857,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2fdceda5-db3f-419b-af4f-ff127eb91924 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:13:01 default-k8s-diff-port-249165 crio[730]: time="2024-05-28 22:13:01.446446771Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=49acf494-b124-4c24-8d03-c0058951e878 name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:13:01 default-k8s-diff-port-249165 crio[730]: time="2024-05-28 22:13:01.446564582Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=49acf494-b124-4c24-8d03-c0058951e878 name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:13:01 default-k8s-diff-port-249165 crio[730]: time="2024-05-28 22:13:01.446758459Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1c548f7544cbb6646beaf3e60e066fd007e36751aa6d64dc3525d2e7c313115c,PodSandboxId:2648aa7b5be82109ec33dc22d721afb5182f4314fd51e2de905ec4553b75fbdb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716933839153737016,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9v4qf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 970de16b-4ade-4d82-8f78-fc83fc86fc8a,},Annotations:map[string]string{io.kubernetes.container.hash: 3e26d238,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0faefa4b1c4c329709900d137e560de3390c8153d2f934dcafed855199f336f0,PodSandboxId:8ebf0bd9db29cba925c9024a33413319840d4fa4c917e999210ed3cced56e604,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716933839100165633,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-m7n7k,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: caf303ad-139a-4b42-820e-617fa654399c,},Annotations:map[string]string{io.kubernetes.container.hash: ea30a637,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fec711aaa85924e7057421993cf75a2bf3b977afbbed007f940052128eb02e89,PodSandboxId:93dc7b05268240c895dc9b9c7de85b9349208e60b66b8292d6cf49c06966da6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1716933838493672775,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be3fd3ac-6795-4168-bd94-007932dcbb2c,},Annotations:map[string]string{io.kubernetes.container.hash: 14e90e58,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8bab6489dd8e95e261046821e03129e1df20511a957172d131a801383b2782c,PodSandboxId:994ab08e76d0a16f9f656192c7305743082ae5274d38c64ceee31d1490c0ae70,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING
,CreatedAt:1716933837933311404,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b2nd9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df64df09-8898-44db-919c-0b1d564538ee,},Annotations:map[string]string{io.kubernetes.container.hash: fb208a3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4719768083408e8d40a1b3e1767d0c299d50fc88c3d66aac9b6fd5d458ee7ac,PodSandboxId:de74adb6bb2e42045eddd14aab0a6da13119970fcec0e361690ae712e702f5f2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716933818509286762,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-249165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39715514d16e1aef2c805f45c43e942c,},Annotations:map[string]string{io.kubernetes.container.hash: 55d06a1a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30fed8617be7400eab4846e41b293ade0c1ada903f8c3e21fdcd5052ca35b41d,PodSandboxId:5819f408569516337af99087fe96a2a11a1dec54cb0fccef7a2ecc34c8394c34,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716933818541679246,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-249165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb34c519fc34f94122ba139e98e7226a,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7456087993ce43df9e33b9136199c345eb75a44dd4be3c9eae3d8378076c3cfb,PodSandboxId:eaf3652a3a78ac206674ff795df24a67155bcb3220adf5b257f77b1588fd29dc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716933818451356924,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-249165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15b574d3676ce396f415ec6bdfd52e3c,},Annotations:map[string]string{io.kubernetes.container.hash: c6aa01a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa0d2ea508b9e928e0a175ee08f28acb732359d0cb411c0baaa7c36b7a9913bb,PodSandboxId:7c43d8161def62f299845abf9bc11d8c831b1ecb31982eb8a4dd37d9caeec00a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716933818414528223,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-249165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832b6925a1cfa430048d5fd4482f4cbc,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=49acf494-b124-4c24-8d03-c0058951e878 name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:13:01 default-k8s-diff-port-249165 crio[730]: time="2024-05-28 22:13:01.490227502Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=20ac4dfc-2512-4a68-8d71-6e190d8099ce name=/runtime.v1.RuntimeService/Version
	May 28 22:13:01 default-k8s-diff-port-249165 crio[730]: time="2024-05-28 22:13:01.490314843Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=20ac4dfc-2512-4a68-8d71-6e190d8099ce name=/runtime.v1.RuntimeService/Version
	May 28 22:13:01 default-k8s-diff-port-249165 crio[730]: time="2024-05-28 22:13:01.491863412Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4febecf7-b4cf-4146-99cd-b2c4953c7e66 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:13:01 default-k8s-diff-port-249165 crio[730]: time="2024-05-28 22:13:01.492376820Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716934381492353417,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4febecf7-b4cf-4146-99cd-b2c4953c7e66 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:13:01 default-k8s-diff-port-249165 crio[730]: time="2024-05-28 22:13:01.493312590Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b27ead58-32ac-43d9-ad73-d9d63e5ef483 name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:13:01 default-k8s-diff-port-249165 crio[730]: time="2024-05-28 22:13:01.493453229Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b27ead58-32ac-43d9-ad73-d9d63e5ef483 name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:13:01 default-k8s-diff-port-249165 crio[730]: time="2024-05-28 22:13:01.493700962Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1c548f7544cbb6646beaf3e60e066fd007e36751aa6d64dc3525d2e7c313115c,PodSandboxId:2648aa7b5be82109ec33dc22d721afb5182f4314fd51e2de905ec4553b75fbdb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716933839153737016,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9v4qf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 970de16b-4ade-4d82-8f78-fc83fc86fc8a,},Annotations:map[string]string{io.kubernetes.container.hash: 3e26d238,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0faefa4b1c4c329709900d137e560de3390c8153d2f934dcafed855199f336f0,PodSandboxId:8ebf0bd9db29cba925c9024a33413319840d4fa4c917e999210ed3cced56e604,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716933839100165633,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-m7n7k,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: caf303ad-139a-4b42-820e-617fa654399c,},Annotations:map[string]string{io.kubernetes.container.hash: ea30a637,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fec711aaa85924e7057421993cf75a2bf3b977afbbed007f940052128eb02e89,PodSandboxId:93dc7b05268240c895dc9b9c7de85b9349208e60b66b8292d6cf49c06966da6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1716933838493672775,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be3fd3ac-6795-4168-bd94-007932dcbb2c,},Annotations:map[string]string{io.kubernetes.container.hash: 14e90e58,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8bab6489dd8e95e261046821e03129e1df20511a957172d131a801383b2782c,PodSandboxId:994ab08e76d0a16f9f656192c7305743082ae5274d38c64ceee31d1490c0ae70,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING
,CreatedAt:1716933837933311404,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b2nd9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df64df09-8898-44db-919c-0b1d564538ee,},Annotations:map[string]string{io.kubernetes.container.hash: fb208a3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4719768083408e8d40a1b3e1767d0c299d50fc88c3d66aac9b6fd5d458ee7ac,PodSandboxId:de74adb6bb2e42045eddd14aab0a6da13119970fcec0e361690ae712e702f5f2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716933818509286762,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-249165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39715514d16e1aef2c805f45c43e942c,},Annotations:map[string]string{io.kubernetes.container.hash: 55d06a1a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30fed8617be7400eab4846e41b293ade0c1ada903f8c3e21fdcd5052ca35b41d,PodSandboxId:5819f408569516337af99087fe96a2a11a1dec54cb0fccef7a2ecc34c8394c34,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716933818541679246,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-249165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb34c519fc34f94122ba139e98e7226a,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7456087993ce43df9e33b9136199c345eb75a44dd4be3c9eae3d8378076c3cfb,PodSandboxId:eaf3652a3a78ac206674ff795df24a67155bcb3220adf5b257f77b1588fd29dc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716933818451356924,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-249165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15b574d3676ce396f415ec6bdfd52e3c,},Annotations:map[string]string{io.kubernetes.container.hash: c6aa01a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa0d2ea508b9e928e0a175ee08f28acb732359d0cb411c0baaa7c36b7a9913bb,PodSandboxId:7c43d8161def62f299845abf9bc11d8c831b1ecb31982eb8a4dd37d9caeec00a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716933818414528223,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-249165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832b6925a1cfa430048d5fd4482f4cbc,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b27ead58-32ac-43d9-ad73-d9d63e5ef483 name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:13:01 default-k8s-diff-port-249165 crio[730]: time="2024-05-28 22:13:01.527091191Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b57370d3-b545-4036-a39f-61fecf25d0ba name=/runtime.v1.RuntimeService/Version
	May 28 22:13:01 default-k8s-diff-port-249165 crio[730]: time="2024-05-28 22:13:01.527179252Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b57370d3-b545-4036-a39f-61fecf25d0ba name=/runtime.v1.RuntimeService/Version
	May 28 22:13:01 default-k8s-diff-port-249165 crio[730]: time="2024-05-28 22:13:01.528105294Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c6eb1c7e-5d63-4537-9eaf-043802cf396e name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:13:01 default-k8s-diff-port-249165 crio[730]: time="2024-05-28 22:13:01.528663254Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716934381528637740,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c6eb1c7e-5d63-4537-9eaf-043802cf396e name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:13:01 default-k8s-diff-port-249165 crio[730]: time="2024-05-28 22:13:01.529216715Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0786be66-ee1f-47d3-8f2a-58410eca02af name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:13:01 default-k8s-diff-port-249165 crio[730]: time="2024-05-28 22:13:01.529284022Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0786be66-ee1f-47d3-8f2a-58410eca02af name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:13:01 default-k8s-diff-port-249165 crio[730]: time="2024-05-28 22:13:01.529533135Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1c548f7544cbb6646beaf3e60e066fd007e36751aa6d64dc3525d2e7c313115c,PodSandboxId:2648aa7b5be82109ec33dc22d721afb5182f4314fd51e2de905ec4553b75fbdb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716933839153737016,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9v4qf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 970de16b-4ade-4d82-8f78-fc83fc86fc8a,},Annotations:map[string]string{io.kubernetes.container.hash: 3e26d238,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0faefa4b1c4c329709900d137e560de3390c8153d2f934dcafed855199f336f0,PodSandboxId:8ebf0bd9db29cba925c9024a33413319840d4fa4c917e999210ed3cced56e604,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716933839100165633,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-m7n7k,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: caf303ad-139a-4b42-820e-617fa654399c,},Annotations:map[string]string{io.kubernetes.container.hash: ea30a637,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fec711aaa85924e7057421993cf75a2bf3b977afbbed007f940052128eb02e89,PodSandboxId:93dc7b05268240c895dc9b9c7de85b9349208e60b66b8292d6cf49c06966da6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1716933838493672775,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be3fd3ac-6795-4168-bd94-007932dcbb2c,},Annotations:map[string]string{io.kubernetes.container.hash: 14e90e58,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8bab6489dd8e95e261046821e03129e1df20511a957172d131a801383b2782c,PodSandboxId:994ab08e76d0a16f9f656192c7305743082ae5274d38c64ceee31d1490c0ae70,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING
,CreatedAt:1716933837933311404,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b2nd9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df64df09-8898-44db-919c-0b1d564538ee,},Annotations:map[string]string{io.kubernetes.container.hash: fb208a3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4719768083408e8d40a1b3e1767d0c299d50fc88c3d66aac9b6fd5d458ee7ac,PodSandboxId:de74adb6bb2e42045eddd14aab0a6da13119970fcec0e361690ae712e702f5f2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716933818509286762,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-249165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39715514d16e1aef2c805f45c43e942c,},Annotations:map[string]string{io.kubernetes.container.hash: 55d06a1a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30fed8617be7400eab4846e41b293ade0c1ada903f8c3e21fdcd5052ca35b41d,PodSandboxId:5819f408569516337af99087fe96a2a11a1dec54cb0fccef7a2ecc34c8394c34,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716933818541679246,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-249165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb34c519fc34f94122ba139e98e7226a,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7456087993ce43df9e33b9136199c345eb75a44dd4be3c9eae3d8378076c3cfb,PodSandboxId:eaf3652a3a78ac206674ff795df24a67155bcb3220adf5b257f77b1588fd29dc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716933818451356924,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-249165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15b574d3676ce396f415ec6bdfd52e3c,},Annotations:map[string]string{io.kubernetes.container.hash: c6aa01a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa0d2ea508b9e928e0a175ee08f28acb732359d0cb411c0baaa7c36b7a9913bb,PodSandboxId:7c43d8161def62f299845abf9bc11d8c831b1ecb31982eb8a4dd37d9caeec00a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716933818414528223,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-249165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832b6925a1cfa430048d5fd4482f4cbc,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0786be66-ee1f-47d3-8f2a-58410eca02af name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1c548f7544cbb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   2648aa7b5be82       coredns-7db6d8ff4d-9v4qf
	0faefa4b1c4c3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   8ebf0bd9db29c       coredns-7db6d8ff4d-m7n7k
	fec711aaa8592       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   93dc7b0526824       storage-provisioner
	c8bab6489dd8e       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   9 minutes ago       Running             kube-proxy                0                   994ab08e76d0a       kube-proxy-b2nd9
	30fed8617be74       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   9 minutes ago       Running             kube-scheduler            2                   5819f40856951       kube-scheduler-default-k8s-diff-port-249165
	b471976808340       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   de74adb6bb2e4       etcd-default-k8s-diff-port-249165
	7456087993ce4       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   9 minutes ago       Running             kube-apiserver            2                   eaf3652a3a78a       kube-apiserver-default-k8s-diff-port-249165
	aa0d2ea508b9e       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   9 minutes ago       Running             kube-controller-manager   2                   7c43d8161def6       kube-controller-manager-default-k8s-diff-port-249165
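The table above matches the column layout of crictl; for reference, a roughly equivalent listing can be pulled from the node by hand. This is a sketch, assuming crictl inside the minikube VM is already configured for the CRI-O socket:

  # open a shell on the node for this profile
  minikube -p default-k8s-diff-port-249165 ssh
  # inside the VM: list all containers known to CRI-O, including exited ones
  sudo crictl ps -a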
	
	
	==> coredns [0faefa4b1c4c329709900d137e560de3390c8153d2f934dcafed855199f336f0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [1c548f7544cbb6646beaf3e60e066fd007e36751aa6d64dc3525d2e7c313115c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
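The SHA512 lines above are CoreDNS hashing the Corefile it loaded; a minimal way to inspect that configuration and the live logs by hand, assuming the stock kubeadm naming (coredns ConfigMap, k8s-app=kube-dns pod label), would be:

  # the Corefile is stored in the coredns ConfigMap in kube-system
  kubectl --context default-k8s-diff-port-249165 -n kube-system get configmap coredns -o yaml
  # tail the logs of both CoreDNS replicas shown above
  kubectl --context default-k8s-diff-port-249165 -n kube-system logs -l k8s-app=kube-dns --tail=20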
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-249165
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-249165
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82
	                    minikube.k8s.io/name=default-k8s-diff-port-249165
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_28T22_03_44_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 22:03:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-249165
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 22:12:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 May 2024 22:09:10 +0000   Tue, 28 May 2024 22:03:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 May 2024 22:09:10 +0000   Tue, 28 May 2024 22:03:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 May 2024 22:09:10 +0000   Tue, 28 May 2024 22:03:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 May 2024 22:09:10 +0000   Tue, 28 May 2024 22:03:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.48
	  Hostname:    default-k8s-diff-port-249165
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1a1a15b852f4408f840f7cdc28c2cdd1
	  System UUID:                1a1a15b8-52f4-408f-840f-7cdc28c2cdd1
	  Boot ID:                    1525e0b5-a615-412d-8626-275908ae12e3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-9v4qf                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m4s
	  kube-system                 coredns-7db6d8ff4d-m7n7k                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m4s
	  kube-system                 etcd-default-k8s-diff-port-249165                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m18s
	  kube-system                 kube-apiserver-default-k8s-diff-port-249165             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-249165    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-proxy-b2nd9                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m4s
	  kube-system                 kube-scheduler-default-k8s-diff-port-249165             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 metrics-server-569cc877fc-6q6pz                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m3s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m3s   kube-proxy       
	  Normal  Starting                 9m18s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m18s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m18s  kubelet          Node default-k8s-diff-port-249165 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m18s  kubelet          Node default-k8s-diff-port-249165 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m18s  kubelet          Node default-k8s-diff-port-249165 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m5s   node-controller  Node default-k8s-diff-port-249165 event: Registered Node default-k8s-diff-port-249165 in Controller
	
	
	==> dmesg <==
	[  +0.039684] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.631690] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.453709] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.624429] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.222216] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.060876] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.052989] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +0.176096] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +0.131262] systemd-fstab-generator[684]: Ignoring "noauto" option for root device
	[  +0.281323] systemd-fstab-generator[714]: Ignoring "noauto" option for root device
	[  +4.309227] systemd-fstab-generator[812]: Ignoring "noauto" option for root device
	[  +0.062021] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.058392] systemd-fstab-generator[934]: Ignoring "noauto" option for root device
	[  +4.640518] kauditd_printk_skb: 97 callbacks suppressed
	[May28 21:59] kauditd_printk_skb: 50 callbacks suppressed
	[  +7.349144] kauditd_printk_skb: 27 callbacks suppressed
	[May28 22:03] kauditd_printk_skb: 9 callbacks suppressed
	[  +1.329746] systemd-fstab-generator[3598]: Ignoring "noauto" option for root device
	[  +4.561055] kauditd_printk_skb: 53 callbacks suppressed
	[  +1.504187] systemd-fstab-generator[3921]: Ignoring "noauto" option for root device
	[ +13.403247] systemd-fstab-generator[4130]: Ignoring "noauto" option for root device
	[  +0.116519] kauditd_printk_skb: 14 callbacks suppressed
	[May28 22:05] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [b4719768083408e8d40a1b3e1767d0c299d50fc88c3d66aac9b6fd5d458ee7ac] <==
	{"level":"info","ts":"2024-05-28T22:03:39.564621Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"36b30da979eae81e received MsgPreVoteResp from 36b30da979eae81e at term 1"}
	{"level":"info","ts":"2024-05-28T22:03:39.564635Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"36b30da979eae81e became candidate at term 2"}
	{"level":"info","ts":"2024-05-28T22:03:39.564647Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"36b30da979eae81e received MsgVoteResp from 36b30da979eae81e at term 2"}
	{"level":"info","ts":"2024-05-28T22:03:39.564655Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"36b30da979eae81e became leader at term 2"}
	{"level":"info","ts":"2024-05-28T22:03:39.564663Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 36b30da979eae81e elected leader 36b30da979eae81e at term 2"}
	{"level":"info","ts":"2024-05-28T22:03:39.567601Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-28T22:03:39.569748Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"36b30da979eae81e","local-member-attributes":"{Name:default-k8s-diff-port-249165 ClientURLs:[https://192.168.72.48:2379]}","request-path":"/0/members/36b30da979eae81e/attributes","cluster-id":"a85db1df86d6d05","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-28T22:03:39.571176Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a85db1df86d6d05","local-member-id":"36b30da979eae81e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-28T22:03:39.571304Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-28T22:03:39.571407Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-28T22:03:39.571436Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-28T22:03:39.571332Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-28T22:03:39.571526Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-28T22:03:39.571351Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-28T22:03:39.573293Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.48:2379"}
	{"level":"info","ts":"2024-05-28T22:03:39.57827Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-05-28T22:09:30.742527Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"203.410561ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-28T22:09:30.742708Z","caller":"traceutil/trace.go:171","msg":"trace[461816137] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:718; }","duration":"203.724242ms","start":"2024-05-28T22:09:30.538949Z","end":"2024-05-28T22:09:30.742673Z","steps":["trace[461816137] 'range keys from in-memory index tree'  (duration: 203.28865ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T22:10:29.591911Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"619.512308ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-28T22:10:29.592061Z","caller":"traceutil/trace.go:171","msg":"trace[1470547791] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:766; }","duration":"619.734222ms","start":"2024-05-28T22:10:28.972296Z","end":"2024-05-28T22:10:29.59203Z","steps":["trace[1470547791] 'range keys from in-memory index tree'  (duration: 619.397905ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T22:10:29.59211Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-28T22:10:28.972283Z","time spent":"619.813582ms","remote":"127.0.0.1:49126","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-05-28T22:10:29.592697Z","caller":"traceutil/trace.go:171","msg":"trace[214288093] transaction","detail":"{read_only:false; response_revision:767; number_of_response:1; }","duration":"143.963892ms","start":"2024-05-28T22:10:29.448712Z","end":"2024-05-28T22:10:29.592676Z","steps":["trace[214288093] 'process raft request'  (duration: 131.336203ms)","trace[214288093] 'compare'  (duration: 11.690437ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-28T22:10:30.152869Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"314.178366ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-28T22:10:30.153728Z","caller":"traceutil/trace.go:171","msg":"trace[473617281] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:767; }","duration":"315.021154ms","start":"2024-05-28T22:10:29.838639Z","end":"2024-05-28T22:10:30.15366Z","steps":["trace[473617281] 'range keys from in-memory index tree'  (duration: 314.079243ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T22:10:30.153824Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-28T22:10:29.838618Z","time spent":"315.184881ms","remote":"127.0.0.1:49112","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	
	
	==> kernel <==
	 22:13:01 up 14 min,  0 users,  load average: 0.33, 0.24, 0.14
	Linux default-k8s-diff-port-249165 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [7456087993ce43df9e33b9136199c345eb75a44dd4be3c9eae3d8378076c3cfb] <==
	I0528 22:06:59.161528       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0528 22:08:41.019236       1 handler_proxy.go:93] no RequestInfo found in the context
	E0528 22:08:41.019345       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0528 22:08:42.020035       1 handler_proxy.go:93] no RequestInfo found in the context
	E0528 22:08:42.020267       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0528 22:08:42.020318       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0528 22:08:42.020202       1 handler_proxy.go:93] no RequestInfo found in the context
	E0528 22:08:42.020511       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0528 22:08:42.021701       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0528 22:09:42.020885       1 handler_proxy.go:93] no RequestInfo found in the context
	E0528 22:09:42.021009       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0528 22:09:42.021043       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0528 22:09:42.022161       1 handler_proxy.go:93] no RequestInfo found in the context
	E0528 22:09:42.022241       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0528 22:09:42.022269       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0528 22:11:42.021348       1 handler_proxy.go:93] no RequestInfo found in the context
	E0528 22:11:42.021418       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0528 22:11:42.021428       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0528 22:11:42.022573       1 handler_proxy.go:93] no RequestInfo found in the context
	E0528 22:11:42.022689       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0528 22:11:42.022717       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [aa0d2ea508b9e928e0a175ee08f28acb732359d0cb411c0baaa7c36b7a9913bb] <==
	I0528 22:07:26.983365       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 22:07:56.532383       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:07:56.993710       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 22:08:26.538299       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:08:27.005411       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 22:08:56.544985       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:08:57.013631       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 22:09:26.551057       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:09:27.027662       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0528 22:09:43.677567       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="270.379µs"
	E0528 22:09:56.558389       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:09:56.676784       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="403.583µs"
	I0528 22:09:57.036501       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 22:10:26.563891       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:10:27.045225       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 22:10:56.568557       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:10:57.053322       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 22:11:26.572685       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:11:27.060983       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 22:11:56.577980       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:11:57.068316       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 22:12:26.582700       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:12:27.076780       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 22:12:56.588018       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:12:57.084701       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [c8bab6489dd8e95e261046821e03129e1df20511a957172d131a801383b2782c] <==
	I0528 22:03:58.316691       1 server_linux.go:69] "Using iptables proxy"
	I0528 22:03:58.341274       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.48"]
	I0528 22:03:58.416759       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0528 22:03:58.416812       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0528 22:03:58.416853       1 server_linux.go:165] "Using iptables Proxier"
	I0528 22:03:58.422704       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0528 22:03:58.422941       1 server.go:872] "Version info" version="v1.30.1"
	I0528 22:03:58.422956       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 22:03:58.424384       1 config.go:192] "Starting service config controller"
	I0528 22:03:58.424399       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0528 22:03:58.424423       1 config.go:101] "Starting endpoint slice config controller"
	I0528 22:03:58.424426       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0528 22:03:58.425063       1 config.go:319] "Starting node config controller"
	I0528 22:03:58.425070       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0528 22:03:58.524785       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0528 22:03:58.524902       1 shared_informer.go:320] Caches are synced for service config
	I0528 22:03:58.526553       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [30fed8617be7400eab4846e41b293ade0c1ada903f8c3e21fdcd5052ca35b41d] <==
	W0528 22:03:41.061226       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0528 22:03:41.061257       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0528 22:03:41.061331       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0528 22:03:41.061362       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0528 22:03:41.061442       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0528 22:03:41.062544       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0528 22:03:41.061682       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0528 22:03:41.062595       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0528 22:03:41.061757       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0528 22:03:41.062609       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0528 22:03:41.963178       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0528 22:03:41.963562       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0528 22:03:41.975413       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0528 22:03:41.975548       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0528 22:03:42.019400       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0528 22:03:42.019445       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0528 22:03:42.023228       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0528 22:03:42.023561       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0528 22:03:42.270972       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0528 22:03:42.271017       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0528 22:03:42.275553       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0528 22:03:42.275743       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0528 22:03:42.282173       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0528 22:03:42.282334       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0528 22:03:44.630655       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 28 22:10:43 default-k8s-diff-port-249165 kubelet[3928]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 22:10:43 default-k8s-diff-port-249165 kubelet[3928]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 22:10:43 default-k8s-diff-port-249165 kubelet[3928]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 22:10:43 default-k8s-diff-port-249165 kubelet[3928]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 28 22:10:47 default-k8s-diff-port-249165 kubelet[3928]: E0528 22:10:47.659656    3928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6q6pz" podUID="443b12f9-e99d-4bb7-ae3f-8a25ed277f44"
	May 28 22:10:59 default-k8s-diff-port-249165 kubelet[3928]: E0528 22:10:59.660982    3928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6q6pz" podUID="443b12f9-e99d-4bb7-ae3f-8a25ed277f44"
	May 28 22:11:11 default-k8s-diff-port-249165 kubelet[3928]: E0528 22:11:11.657789    3928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6q6pz" podUID="443b12f9-e99d-4bb7-ae3f-8a25ed277f44"
	May 28 22:11:25 default-k8s-diff-port-249165 kubelet[3928]: E0528 22:11:25.657827    3928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6q6pz" podUID="443b12f9-e99d-4bb7-ae3f-8a25ed277f44"
	May 28 22:11:40 default-k8s-diff-port-249165 kubelet[3928]: E0528 22:11:40.658544    3928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6q6pz" podUID="443b12f9-e99d-4bb7-ae3f-8a25ed277f44"
	May 28 22:11:43 default-k8s-diff-port-249165 kubelet[3928]: E0528 22:11:43.694037    3928 iptables.go:577] "Could not set up iptables canary" err=<
	May 28 22:11:43 default-k8s-diff-port-249165 kubelet[3928]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 22:11:43 default-k8s-diff-port-249165 kubelet[3928]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 22:11:43 default-k8s-diff-port-249165 kubelet[3928]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 22:11:43 default-k8s-diff-port-249165 kubelet[3928]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 28 22:11:54 default-k8s-diff-port-249165 kubelet[3928]: E0528 22:11:54.658861    3928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6q6pz" podUID="443b12f9-e99d-4bb7-ae3f-8a25ed277f44"
	May 28 22:12:08 default-k8s-diff-port-249165 kubelet[3928]: E0528 22:12:08.658597    3928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6q6pz" podUID="443b12f9-e99d-4bb7-ae3f-8a25ed277f44"
	May 28 22:12:20 default-k8s-diff-port-249165 kubelet[3928]: E0528 22:12:20.658072    3928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6q6pz" podUID="443b12f9-e99d-4bb7-ae3f-8a25ed277f44"
	May 28 22:12:32 default-k8s-diff-port-249165 kubelet[3928]: E0528 22:12:32.657907    3928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6q6pz" podUID="443b12f9-e99d-4bb7-ae3f-8a25ed277f44"
	May 28 22:12:43 default-k8s-diff-port-249165 kubelet[3928]: E0528 22:12:43.694341    3928 iptables.go:577] "Could not set up iptables canary" err=<
	May 28 22:12:43 default-k8s-diff-port-249165 kubelet[3928]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 22:12:43 default-k8s-diff-port-249165 kubelet[3928]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 22:12:43 default-k8s-diff-port-249165 kubelet[3928]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 22:12:43 default-k8s-diff-port-249165 kubelet[3928]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 28 22:12:46 default-k8s-diff-port-249165 kubelet[3928]: E0528 22:12:46.657511    3928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6q6pz" podUID="443b12f9-e99d-4bb7-ae3f-8a25ed277f44"
	May 28 22:12:58 default-k8s-diff-port-249165 kubelet[3928]: E0528 22:12:58.657969    3928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6q6pz" podUID="443b12f9-e99d-4bb7-ae3f-8a25ed277f44"
	
	
	==> storage-provisioner [fec711aaa85924e7057421993cf75a2bf3b977afbbed007f940052128eb02e89] <==
	I0528 22:03:58.686637       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0528 22:03:58.699325       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0528 22:03:58.699391       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0528 22:03:58.715562       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0528 22:03:58.716604       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-249165_a80cb70d-b310-4cb7-a736-dbef5dd84831!
	I0528 22:03:58.720033       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e7bc4f15-7b61-4ca5-a0e6-91a662ae0cb2", APIVersion:"v1", ResourceVersion:"391", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-249165_a80cb70d-b310-4cb7-a736-dbef5dd84831 became leader
	I0528 22:03:58.817594       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-249165_a80cb70d-b310-4cb7-a736-dbef5dd84831!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-249165 -n default-k8s-diff-port-249165
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-249165 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-6q6pz
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-249165 describe pod metrics-server-569cc877fc-6q6pz
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-249165 describe pod metrics-server-569cc877fc-6q6pz: exit status 1 (62.321038ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-6q6pz" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-249165 describe pod metrics-server-569cc877fc-6q6pz: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.06s)
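For reference, the pod poll that these checks keep retrying (the AddonExistsAfterStop block below shows the selector k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace, with a 9m0s budget) can be reproduced outside the suite with a minimal client-go sketch such as the one below. This is not the suite's actual helper; the kubeconfig path, poll interval, and output strings are assumptions.

	// Minimal sketch (assumed names/paths): poll for pods matching the label the
	// test waits on, using client-go against the profile's kubeconfig.
	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: path to the kubeconfig of the profile under test.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(9 * time.Minute) // same budget the test reports
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
			if err != nil {
				// e.g. "connect: connection refused", as in the warnings below
				fmt.Println("WARNING: pod list returned:", err)
			} else if len(pods.Items) > 0 {
				fmt.Printf("found %d dashboard pod(s)\n", len(pods.Items))
				return
			}
			time.Sleep(10 * time.Second) // assumed poll interval
		}
		fmt.Println("timed out waiting for k8s-app=kubernetes-dashboard pods")
	}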

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (147.73s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
E0528 22:06:36.131532   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/calico-110727/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
E0528 22:06:55.337587   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/custom-flannel-110727/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.8:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.8:8443: connect: connection refused
    (identical warning repeated 32 more times; every poll against 192.168.39.8:8443 was refused)
E0528 22:07:37.451660   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/functional-193928/client.crt: no such file or directory
    (the same warning repeated another 43 times)
E0528 22:08:20.453270   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/flannel-110727/client.crt: no such file or directory
    (the same warning repeated a further 35 times before the wait expired)
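The warnings above come from the test helper repeatedly listing pods in the kubernetes-dashboard namespace with the k8s-app=kubernetes-dashboard label selector (visible in the request URL) and retrying while the apiserver at 192.168.39.8:8443 refuses connections. The following is a minimal client-go sketch of that kind of poll, illustrative only and not the minikube helper itself; the kubeconfig path and retry interval are assumptions.

// poll_dashboard.go: illustrative sketch of polling the dashboard pod list.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; the real helper resolves it from the profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			// While the apiserver is down this prints the same
			// "connect: connection refused" error seen in the log above.
			fmt.Println("WARNING:", err)
			time.Sleep(3 * time.Second)
			continue
		}
		fmt.Println("found", len(pods.Items), "dashboard pods")
		return
	}
}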
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-499466 -n old-k8s-version-499466
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-499466 -n old-k8s-version-499466: exit status 2 (242.318993ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-499466" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-499466 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-499466 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.479µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-499466 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
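The image check at start_stop_delete_test.go:297 could not run because the describe call above timed out. Roughly, the assertion amounts to reading the dashboard-metrics-scraper deployment and checking that one of its container images contains registry.k8s.io/echoserver:1.4. The sketch below is an approximation via the API rather than the test's own kubectl-describe path; the kubeconfig path is an assumption.

// check_scraper_image.go: illustrative sketch of the image check the test could not complete.
package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location, not the test's actual value.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Deployment name and namespace are taken from the log above.
	deploy, err := client.AppsV1().Deployments("kubernetes-dashboard").Get(
		context.TODO(), "dashboard-metrics-scraper", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range deploy.Spec.Template.Spec.Containers {
		if strings.Contains(c.Image, "registry.k8s.io/echoserver:1.4") {
			fmt.Println("expected image found:", c.Image)
			return
		}
	}
	fmt.Println("expected image registry.k8s.io/echoserver:1.4 not found")
}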
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-499466 -n old-k8s-version-499466
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-499466 -n old-k8s-version-499466: exit status 2 (223.878106ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-499466 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-110727                           | enable-default-cni-110727    | jenkins | v1.33.1 | 28 May 24 21:39 UTC | 28 May 24 21:39 UTC |
	|         | sudo systemctl status crio                             |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-110727                           | enable-default-cni-110727    | jenkins | v1.33.1 | 28 May 24 21:39 UTC | 28 May 24 21:39 UTC |
	|         | sudo systemctl cat crio                                |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-110727                           | enable-default-cni-110727    | jenkins | v1.33.1 | 28 May 24 21:39 UTC | 28 May 24 21:39 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |         |                     |                     |
	|         | \;                                                     |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-110727                           | enable-default-cni-110727    | jenkins | v1.33.1 | 28 May 24 21:39 UTC | 28 May 24 21:39 UTC |
	|         | sudo crio config                                       |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-110727                           | enable-default-cni-110727    | jenkins | v1.33.1 | 28 May 24 21:39 UTC | 28 May 24 21:39 UTC |
	| start   | -p embed-certs-595279                                  | embed-certs-595279           | jenkins | v1.33.1 | 28 May 24 21:39 UTC | 28 May 24 21:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-290122             | no-preload-290122            | jenkins | v1.33.1 | 28 May 24 21:41 UTC | 28 May 24 21:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-290122                                   | no-preload-290122            | jenkins | v1.33.1 | 28 May 24 21:41 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-595279            | embed-certs-595279           | jenkins | v1.33.1 | 28 May 24 21:41 UTC | 28 May 24 21:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-595279                                  | embed-certs-595279           | jenkins | v1.33.1 | 28 May 24 21:41 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-499466        | old-k8s-version-499466       | jenkins | v1.33.1 | 28 May 24 21:43 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-290122                  | no-preload-290122            | jenkins | v1.33.1 | 28 May 24 21:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-595279                 | embed-certs-595279           | jenkins | v1.33.1 | 28 May 24 21:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-290122                                   | no-preload-290122            | jenkins | v1.33.1 | 28 May 24 21:44 UTC | 28 May 24 21:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-595279                                  | embed-certs-595279           | jenkins | v1.33.1 | 28 May 24 21:44 UTC | 28 May 24 21:53 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-499466                              | old-k8s-version-499466       | jenkins | v1.33.1 | 28 May 24 21:45 UTC | 28 May 24 21:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-499466             | old-k8s-version-499466       | jenkins | v1.33.1 | 28 May 24 21:45 UTC | 28 May 24 21:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-499466                              | old-k8s-version-499466       | jenkins | v1.33.1 | 28 May 24 21:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-257793                              | cert-expiration-257793       | jenkins | v1.33.1 | 28 May 24 21:46 UTC | 28 May 24 21:46 UTC |
	| delete  | -p                                                     | disable-driver-mounts-807140 | jenkins | v1.33.1 | 28 May 24 21:46 UTC | 28 May 24 21:46 UTC |
	|         | disable-driver-mounts-807140                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-249165 | jenkins | v1.33.1 | 28 May 24 21:46 UTC | 28 May 24 21:50 UTC |
	|         | default-k8s-diff-port-249165                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-249165  | default-k8s-diff-port-249165 | jenkins | v1.33.1 | 28 May 24 21:51 UTC | 28 May 24 21:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-249165 | jenkins | v1.33.1 | 28 May 24 21:51 UTC |                     |
	|         | default-k8s-diff-port-249165                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-249165       | default-k8s-diff-port-249165 | jenkins | v1.33.1 | 28 May 24 21:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-249165 | jenkins | v1.33.1 | 28 May 24 21:53 UTC | 28 May 24 22:04 UTC |
	|         | default-k8s-diff-port-249165                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/28 21:53:40
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0528 21:53:40.744358   73188 out.go:291] Setting OutFile to fd 1 ...
	I0528 21:53:40.744653   73188 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:53:40.744664   73188 out.go:304] Setting ErrFile to fd 2...
	I0528 21:53:40.744668   73188 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:53:40.744923   73188 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
	I0528 21:53:40.745490   73188 out.go:298] Setting JSON to false
	I0528 21:53:40.746663   73188 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5764,"bootTime":1716927457,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0528 21:53:40.746723   73188 start.go:139] virtualization: kvm guest
	I0528 21:53:40.749013   73188 out.go:177] * [default-k8s-diff-port-249165] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0528 21:53:40.750611   73188 notify.go:220] Checking for updates...
	I0528 21:53:40.750618   73188 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 21:53:40.752116   73188 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 21:53:40.753384   73188 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 21:53:40.754612   73188 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 21:53:40.755846   73188 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0528 21:53:40.756972   73188 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 21:53:40.758627   73188 config.go:182] Loaded profile config "default-k8s-diff-port-249165": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 21:53:40.759050   73188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:53:40.759106   73188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:53:40.774337   73188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37895
	I0528 21:53:40.774754   73188 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:53:40.775318   73188 main.go:141] libmachine: Using API Version  1
	I0528 21:53:40.775344   73188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:53:40.775633   73188 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:53:40.775791   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 21:53:40.776007   73188 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 21:53:40.776327   73188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:53:40.776382   73188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:53:40.790531   73188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41603
	I0528 21:53:40.790970   73188 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:53:40.791471   73188 main.go:141] libmachine: Using API Version  1
	I0528 21:53:40.791498   73188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:53:40.791802   73188 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:53:40.791983   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 21:53:40.826633   73188 out.go:177] * Using the kvm2 driver based on existing profile
	I0528 21:53:40.827847   73188 start.go:297] selected driver: kvm2
	I0528 21:53:40.827863   73188 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-249165 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-249165 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.48 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:53:40.827981   73188 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0528 21:53:40.828705   73188 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 21:53:40.828777   73188 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18966-3963/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0528 21:53:40.844223   73188 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0528 21:53:40.844574   73188 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 21:53:40.844638   73188 cni.go:84] Creating CNI manager for ""
	I0528 21:53:40.844650   73188 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 21:53:40.844682   73188 start.go:340] cluster config:
	{Name:default-k8s-diff-port-249165 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-249165 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.48 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:53:40.844775   73188 iso.go:125] acquiring lock: {Name:mk0ef48bd19f4019c3ee87391d15b9afc1d5e8d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 21:53:40.846544   73188 out.go:177] * Starting "default-k8s-diff-port-249165" primary control-plane node in "default-k8s-diff-port-249165" cluster
	I0528 21:53:40.847754   73188 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 21:53:40.847792   73188 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0528 21:53:40.847801   73188 cache.go:56] Caching tarball of preloaded images
	I0528 21:53:40.847870   73188 preload.go:173] Found /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0528 21:53:40.847880   73188 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0528 21:53:40.847964   73188 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/default-k8s-diff-port-249165/config.json ...
	I0528 21:53:40.848196   73188 start.go:360] acquireMachinesLock for default-k8s-diff-port-249165: {Name:mk6fb3103b370c7f3d9923fd9de7afe2c77239d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0528 21:53:40.848256   73188 start.go:364] duration metric: took 38.994µs to acquireMachinesLock for "default-k8s-diff-port-249165"
	I0528 21:53:40.848271   73188 start.go:96] Skipping create...Using existing machine configuration
	I0528 21:53:40.848281   73188 fix.go:54] fixHost starting: 
	I0528 21:53:40.848534   73188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:53:40.848571   73188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:53:40.863227   73188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32843
	I0528 21:53:40.863708   73188 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:53:40.864162   73188 main.go:141] libmachine: Using API Version  1
	I0528 21:53:40.864182   73188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:53:40.864616   73188 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:53:40.864794   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 21:53:40.864952   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetState
	I0528 21:53:40.866583   73188 fix.go:112] recreateIfNeeded on default-k8s-diff-port-249165: state=Running err=<nil>
	W0528 21:53:40.866600   73188 fix.go:138] unexpected machine state, will restart: <nil>
	I0528 21:53:40.868382   73188 out.go:177] * Updating the running kvm2 "default-k8s-diff-port-249165" VM ...
	I0528 21:53:38.450836   70002 logs.go:123] Gathering logs for storage-provisioner [9c5ee70d85c3e595c91ab0c4bcfdbd1f1161b2643af86fce85c056fc5d38482d] ...
	I0528 21:53:38.450866   70002 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c5ee70d85c3e595c91ab0c4bcfdbd1f1161b2643af86fce85c056fc5d38482d"
	I0528 21:53:38.485575   70002 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:53:38.485610   70002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:53:38.854290   70002 logs.go:123] Gathering logs for container status ...
	I0528 21:53:38.854325   70002 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:53:38.902357   70002 logs.go:123] Gathering logs for dmesg ...
	I0528 21:53:38.902389   70002 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:53:38.916785   70002 logs.go:123] Gathering logs for etcd [3047accd150d979b115d9296d1b09bdad9f23c29d7c066800914fe3e6e001d3c] ...
	I0528 21:53:38.916820   70002 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3047accd150d979b115d9296d1b09bdad9f23c29d7c066800914fe3e6e001d3c"
	I0528 21:53:38.982119   70002 logs.go:123] Gathering logs for kube-apiserver [056fb79dac85895cd23ca69b8a238157e22babd5bf16ae416bb52d6e9b470622] ...
	I0528 21:53:38.982148   70002 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 056fb79dac85895cd23ca69b8a238157e22babd5bf16ae416bb52d6e9b470622"
	I0528 21:53:39.031038   70002 logs.go:123] Gathering logs for kube-proxy [cfb41c075cb48a9d458fcc2a85aca29b37f56931246adac5b1230661c528edcc] ...
	I0528 21:53:39.031066   70002 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cfb41c075cb48a9d458fcc2a85aca29b37f56931246adac5b1230661c528edcc"
	I0528 21:53:39.068094   70002 logs.go:123] Gathering logs for kube-controller-manager [b5366e4c2bcdabc25f55a71903840f5a97745fb1bac231cf9c73daaf2b9dab89] ...
	I0528 21:53:39.068123   70002 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5366e4c2bcdabc25f55a71903840f5a97745fb1bac231cf9c73daaf2b9dab89"
	I0528 21:53:39.129214   70002 logs.go:123] Gathering logs for kubelet ...
	I0528 21:53:39.129248   70002 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:53:39.191483   70002 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:53:39.191523   70002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
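	Note: the lines above show the harness collecting diagnostics from the embed-certs-595279 VM over SSH (crictl logs per container, journalctl for kubelet and CRI-O, kubectl describe nodes). Roughly the same data can be pulled by hand; a minimal sketch, not part of the captured log, using the profile name from the log and a placeholder container ID:
		# list CRI containers and fetch their logs, as logs.go does above
		minikube -p embed-certs-595279 ssh -- "sudo crictl ps -a --name kube-apiserver --quiet"
		minikube -p embed-certs-595279 ssh -- "sudo crictl logs --tail 400 <container-id>"
		# kubelet and CRI-O unit logs gathered the same way
		minikube -p embed-certs-595279 ssh -- "sudo journalctl -u kubelet -n 400"
		minikube -p embed-certs-595279 ssh -- "sudo journalctl -u crio -n 400"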
	I0528 21:53:41.813698   70002 system_pods.go:59] 8 kube-system pods found
	I0528 21:53:41.813725   70002 system_pods.go:61] "coredns-7db6d8ff4d-8cb7b" [b3908d89-cfc6-4f1a-9aef-861aac0d3e29] Running
	I0528 21:53:41.813730   70002 system_pods.go:61] "etcd-embed-certs-595279" [58581274-a239-4367-926e-c333f201d4f8] Running
	I0528 21:53:41.813733   70002 system_pods.go:61] "kube-apiserver-embed-certs-595279" [cc2dd164-709f-4e59-81bc-ce9d30bbced9] Running
	I0528 21:53:41.813736   70002 system_pods.go:61] "kube-controller-manager-embed-certs-595279" [e049af67-fff8-466f-96ff-81d148602884] Running
	I0528 21:53:41.813739   70002 system_pods.go:61] "kube-proxy-pnl5w" [9c2c68bc-42c2-425e-ae35-a8c07b5d5221] Running
	I0528 21:53:41.813742   70002 system_pods.go:61] "kube-scheduler-embed-certs-595279" [ab6bff2a-7266-4c5c-96bc-c87ef15dd342] Running
	I0528 21:53:41.813748   70002 system_pods.go:61] "metrics-server-569cc877fc-f6fz2" [b5e432cd-3b95-4f20-b9b3-c498512a7564] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0528 21:53:41.813751   70002 system_pods.go:61] "storage-provisioner" [7bf52279-1fbc-40e5-8376-992c545c55dd] Running
	I0528 21:53:41.813771   70002 system_pods.go:74] duration metric: took 3.894565784s to wait for pod list to return data ...
	I0528 21:53:41.813780   70002 default_sa.go:34] waiting for default service account to be created ...
	I0528 21:53:41.816297   70002 default_sa.go:45] found service account: "default"
	I0528 21:53:41.816319   70002 default_sa.go:55] duration metric: took 2.532587ms for default service account to be created ...
	I0528 21:53:41.816326   70002 system_pods.go:116] waiting for k8s-apps to be running ...
	I0528 21:53:41.821407   70002 system_pods.go:86] 8 kube-system pods found
	I0528 21:53:41.821437   70002 system_pods.go:89] "coredns-7db6d8ff4d-8cb7b" [b3908d89-cfc6-4f1a-9aef-861aac0d3e29] Running
	I0528 21:53:41.821447   70002 system_pods.go:89] "etcd-embed-certs-595279" [58581274-a239-4367-926e-c333f201d4f8] Running
	I0528 21:53:41.821453   70002 system_pods.go:89] "kube-apiserver-embed-certs-595279" [cc2dd164-709f-4e59-81bc-ce9d30bbced9] Running
	I0528 21:53:41.821458   70002 system_pods.go:89] "kube-controller-manager-embed-certs-595279" [e049af67-fff8-466f-96ff-81d148602884] Running
	I0528 21:53:41.821461   70002 system_pods.go:89] "kube-proxy-pnl5w" [9c2c68bc-42c2-425e-ae35-a8c07b5d5221] Running
	I0528 21:53:41.821465   70002 system_pods.go:89] "kube-scheduler-embed-certs-595279" [ab6bff2a-7266-4c5c-96bc-c87ef15dd342] Running
	I0528 21:53:41.821472   70002 system_pods.go:89] "metrics-server-569cc877fc-f6fz2" [b5e432cd-3b95-4f20-b9b3-c498512a7564] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0528 21:53:41.821480   70002 system_pods.go:89] "storage-provisioner" [7bf52279-1fbc-40e5-8376-992c545c55dd] Running
	I0528 21:53:41.821489   70002 system_pods.go:126] duration metric: took 5.157831ms to wait for k8s-apps to be running ...
	I0528 21:53:41.821498   70002 system_svc.go:44] waiting for kubelet service to be running ....
	I0528 21:53:41.821538   70002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 21:53:41.838819   70002 system_svc.go:56] duration metric: took 17.315204ms WaitForService to wait for kubelet
	I0528 21:53:41.838844   70002 kubeadm.go:576] duration metric: took 4m26.419891509s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 21:53:41.838864   70002 node_conditions.go:102] verifying NodePressure condition ...
	I0528 21:53:41.841408   70002 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 21:53:41.841424   70002 node_conditions.go:123] node cpu capacity is 2
	I0528 21:53:41.841433   70002 node_conditions.go:105] duration metric: took 2.56566ms to run NodePressure ...
	I0528 21:53:41.841445   70002 start.go:240] waiting for startup goroutines ...
	I0528 21:53:41.841452   70002 start.go:245] waiting for cluster config update ...
	I0528 21:53:41.841463   70002 start.go:254] writing updated cluster config ...
	I0528 21:53:41.841709   70002 ssh_runner.go:195] Run: rm -f paused
	I0528 21:53:41.886820   70002 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0528 21:53:41.888710   70002 out.go:177] * Done! kubectl is now configured to use "embed-certs-595279" cluster and "default" namespace by default
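	Note: the readiness conditions reported just above (kube-system pods Running, default service account present, kubelet unit active, NodePressure clear) can be spot-checked by hand against the same profile; a minimal sketch, not part of the captured log:
		kubectl --context embed-certs-595279 get pods -n kube-system
		kubectl --context embed-certs-595279 get serviceaccount default
		minikube -p embed-certs-595279 ssh -- sudo systemctl is-active kubelet
		kubectl --context embed-certs-595279 describe nodes | grep -A 8 "Conditions:"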
	I0528 21:53:40.749506   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:53:43.248909   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:53:40.869524   73188 machine.go:94] provisionDockerMachine start ...
	I0528 21:53:40.869542   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 21:53:40.869730   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 21:53:40.872099   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:53:40.872470   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:53:40.872491   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:53:40.872625   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHPort
	I0528 21:53:40.872772   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:53:40.872952   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:53:40.873092   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHUsername
	I0528 21:53:40.873253   73188 main.go:141] libmachine: Using SSH client type: native
	I0528 21:53:40.873429   73188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.48 22 <nil> <nil>}
	I0528 21:53:40.873438   73188 main.go:141] libmachine: About to run SSH command:
	hostname
	I0528 21:53:43.770029   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:53:45.748750   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:53:48.248904   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:53:46.841982   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:53:50.249442   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:53:52.749680   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:53:52.922023   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:53:55.251148   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:53:57.748960   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:53:55.994071   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:53:59.749114   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:02.248306   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:02.074025   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:05.145996   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:04.248616   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:06.749284   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:09.247806   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:11.249481   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:13.748196   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:12.825536   70393 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0528 21:54:12.825810   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:54:12.826159   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:54:14.266167   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:15.749468   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:18.248675   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:17.826706   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:54:17.826945   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:54:17.338025   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:20.248941   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:22.749284   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:23.417971   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:25.248681   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:27.748556   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:27.827370   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:54:27.827610   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:54:26.490049   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:29.748865   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:32.248746   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:32.569987   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:35.641969   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:34.249483   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:36.748835   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:38.749264   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:41.251039   69886 pod_ready.go:102] pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace has status "Ready":"False"
	I0528 21:54:43.248816   69886 pod_ready.go:81] duration metric: took 4m0.006582939s for pod "metrics-server-569cc877fc-j2khc" in "kube-system" namespace to be "Ready" ...
	E0528 21:54:43.248839   69886 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0528 21:54:43.248847   69886 pod_ready.go:38] duration metric: took 4m4.041932949s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 21:54:43.248863   69886 api_server.go:52] waiting for apiserver process to appear ...
	I0528 21:54:43.248889   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:54:43.248933   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:54:43.296609   69886 cri.go:89] found id: "42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc"
	I0528 21:54:43.296630   69886 cri.go:89] found id: ""
	I0528 21:54:43.296638   69886 logs.go:276] 1 containers: [42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc]
	I0528 21:54:43.296694   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:43.301171   69886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:54:43.301211   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:54:43.340772   69886 cri.go:89] found id: "48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e"
	I0528 21:54:43.340793   69886 cri.go:89] found id: ""
	I0528 21:54:43.340799   69886 logs.go:276] 1 containers: [48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e]
	I0528 21:54:43.340843   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:43.345422   69886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:54:43.345489   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:54:43.392432   69886 cri.go:89] found id: "ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b"
	I0528 21:54:43.392458   69886 cri.go:89] found id: ""
	I0528 21:54:43.392467   69886 logs.go:276] 1 containers: [ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b]
	I0528 21:54:43.392521   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:43.396870   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:54:43.396943   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:54:43.433491   69886 cri.go:89] found id: "e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af"
	I0528 21:54:43.433516   69886 cri.go:89] found id: ""
	I0528 21:54:43.433525   69886 logs.go:276] 1 containers: [e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af]
	I0528 21:54:43.433584   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:43.438209   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:54:43.438276   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:54:43.479257   69886 cri.go:89] found id: "9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910"
	I0528 21:54:43.479299   69886 cri.go:89] found id: ""
	I0528 21:54:43.479309   69886 logs.go:276] 1 containers: [9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910]
	I0528 21:54:43.479425   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:43.484063   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:54:43.484127   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:54:43.523360   69886 cri.go:89] found id: "e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a"
	I0528 21:54:43.523384   69886 cri.go:89] found id: ""
	I0528 21:54:43.523394   69886 logs.go:276] 1 containers: [e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a]
	I0528 21:54:43.523443   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:43.527859   69886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:54:43.527915   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:54:43.565610   69886 cri.go:89] found id: ""
	I0528 21:54:43.565631   69886 logs.go:276] 0 containers: []
	W0528 21:54:43.565638   69886 logs.go:278] No container was found matching "kindnet"
	I0528 21:54:43.565643   69886 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0528 21:54:43.565687   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0528 21:54:43.603133   69886 cri.go:89] found id: "6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba"
	I0528 21:54:43.603155   69886 cri.go:89] found id: "912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0"
	I0528 21:54:43.603159   69886 cri.go:89] found id: ""
	I0528 21:54:43.603166   69886 logs.go:276] 2 containers: [6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba 912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0]
	I0528 21:54:43.603233   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:43.607421   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:43.611570   69886 logs.go:123] Gathering logs for kube-apiserver [42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc] ...
	I0528 21:54:43.611593   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc"
	I0528 21:54:43.656455   69886 logs.go:123] Gathering logs for etcd [48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e] ...
	I0528 21:54:43.656483   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e"
	I0528 21:54:43.708385   69886 logs.go:123] Gathering logs for kube-controller-manager [e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a] ...
	I0528 21:54:43.708416   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a"
	I0528 21:54:43.766267   69886 logs.go:123] Gathering logs for storage-provisioner [6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba] ...
	I0528 21:54:43.766300   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba"
	I0528 21:54:43.813734   69886 logs.go:123] Gathering logs for kube-proxy [9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910] ...
	I0528 21:54:43.813782   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910"
	I0528 21:54:43.857289   69886 logs.go:123] Gathering logs for storage-provisioner [912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0] ...
	I0528 21:54:43.857317   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0"
	I0528 21:54:43.897976   69886 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:54:43.898001   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:54:41.721973   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:44.798063   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:44.394070   69886 logs.go:123] Gathering logs for kubelet ...
	I0528 21:54:44.394112   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:54:44.450041   69886 logs.go:123] Gathering logs for dmesg ...
	I0528 21:54:44.450078   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:54:44.464067   69886 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:54:44.464092   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0528 21:54:44.588402   69886 logs.go:123] Gathering logs for coredns [ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b] ...
	I0528 21:54:44.588432   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b"
	I0528 21:54:44.631477   69886 logs.go:123] Gathering logs for kube-scheduler [e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af] ...
	I0528 21:54:44.631505   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af"
	I0528 21:54:44.676531   69886 logs.go:123] Gathering logs for container status ...
	I0528 21:54:44.676562   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:54:47.229026   69886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:54:47.247014   69886 api_server.go:72] duration metric: took 4m15.746572678s to wait for apiserver process to appear ...
	I0528 21:54:47.247043   69886 api_server.go:88] waiting for apiserver healthz status ...
	I0528 21:54:47.247085   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:54:47.247153   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:54:47.291560   69886 cri.go:89] found id: "42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc"
	I0528 21:54:47.291592   69886 cri.go:89] found id: ""
	I0528 21:54:47.291602   69886 logs.go:276] 1 containers: [42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc]
	I0528 21:54:47.291667   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:47.296538   69886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:54:47.296597   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:54:47.335786   69886 cri.go:89] found id: "48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e"
	I0528 21:54:47.335809   69886 cri.go:89] found id: ""
	I0528 21:54:47.335817   69886 logs.go:276] 1 containers: [48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e]
	I0528 21:54:47.335861   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:47.340222   69886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:54:47.340295   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:54:47.376487   69886 cri.go:89] found id: "ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b"
	I0528 21:54:47.376518   69886 cri.go:89] found id: ""
	I0528 21:54:47.376528   69886 logs.go:276] 1 containers: [ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b]
	I0528 21:54:47.376587   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:47.380986   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:54:47.381043   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:54:47.419121   69886 cri.go:89] found id: "e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af"
	I0528 21:54:47.419144   69886 cri.go:89] found id: ""
	I0528 21:54:47.419151   69886 logs.go:276] 1 containers: [e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af]
	I0528 21:54:47.419194   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:47.423323   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:54:47.423378   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:54:47.460781   69886 cri.go:89] found id: "9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910"
	I0528 21:54:47.460806   69886 cri.go:89] found id: ""
	I0528 21:54:47.460813   69886 logs.go:276] 1 containers: [9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910]
	I0528 21:54:47.460856   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:47.465054   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:54:47.465107   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:54:47.510054   69886 cri.go:89] found id: "e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a"
	I0528 21:54:47.510077   69886 cri.go:89] found id: ""
	I0528 21:54:47.510085   69886 logs.go:276] 1 containers: [e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a]
	I0528 21:54:47.510136   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:47.514707   69886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:54:47.514764   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:54:47.551564   69886 cri.go:89] found id: ""
	I0528 21:54:47.551587   69886 logs.go:276] 0 containers: []
	W0528 21:54:47.551594   69886 logs.go:278] No container was found matching "kindnet"
	I0528 21:54:47.551600   69886 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0528 21:54:47.551647   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0528 21:54:47.591484   69886 cri.go:89] found id: "6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba"
	I0528 21:54:47.591506   69886 cri.go:89] found id: "912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0"
	I0528 21:54:47.591511   69886 cri.go:89] found id: ""
	I0528 21:54:47.591520   69886 logs.go:276] 2 containers: [6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba 912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0]
	I0528 21:54:47.591581   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:47.596620   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:47.600861   69886 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:54:47.600884   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:54:48.031181   69886 logs.go:123] Gathering logs for kubelet ...
	I0528 21:54:48.031218   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:54:48.085321   69886 logs.go:123] Gathering logs for kube-apiserver [42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc] ...
	I0528 21:54:48.085354   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc"
	I0528 21:54:48.135504   69886 logs.go:123] Gathering logs for coredns [ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b] ...
	I0528 21:54:48.135538   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b"
	I0528 21:54:48.172440   69886 logs.go:123] Gathering logs for kube-proxy [9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910] ...
	I0528 21:54:48.172474   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910"
	I0528 21:54:48.210817   69886 logs.go:123] Gathering logs for storage-provisioner [6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba] ...
	I0528 21:54:48.210849   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba"
	I0528 21:54:48.248170   69886 logs.go:123] Gathering logs for storage-provisioner [912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0] ...
	I0528 21:54:48.248196   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0"
	I0528 21:54:48.290905   69886 logs.go:123] Gathering logs for container status ...
	I0528 21:54:48.290933   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:54:48.344302   69886 logs.go:123] Gathering logs for dmesg ...
	I0528 21:54:48.344333   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:54:48.363912   69886 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:54:48.363940   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0528 21:54:48.490794   69886 logs.go:123] Gathering logs for etcd [48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e] ...
	I0528 21:54:48.490836   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e"
	I0528 21:54:48.538412   69886 logs.go:123] Gathering logs for kube-scheduler [e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af] ...
	I0528 21:54:48.538443   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af"
	I0528 21:54:48.574693   69886 logs.go:123] Gathering logs for kube-controller-manager [e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a] ...
	I0528 21:54:48.574724   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a"
	I0528 21:54:47.828383   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:54:47.828686   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:54:51.128492   69886 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0528 21:54:51.132736   69886 api_server.go:279] https://192.168.50.138:8443/healthz returned 200:
	ok
	I0528 21:54:51.133908   69886 api_server.go:141] control plane version: v1.30.1
	I0528 21:54:51.133927   69886 api_server.go:131] duration metric: took 3.886877047s to wait for apiserver health ...
	I0528 21:54:51.133935   69886 system_pods.go:43] waiting for kube-system pods to appear ...
	I0528 21:54:51.133953   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:54:51.134009   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:54:51.174021   69886 cri.go:89] found id: "42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc"
	I0528 21:54:51.174042   69886 cri.go:89] found id: ""
	I0528 21:54:51.174049   69886 logs.go:276] 1 containers: [42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc]
	I0528 21:54:51.174100   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:51.179416   69886 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:54:51.179487   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:54:51.218954   69886 cri.go:89] found id: "48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e"
	I0528 21:54:51.218981   69886 cri.go:89] found id: ""
	I0528 21:54:51.218992   69886 logs.go:276] 1 containers: [48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e]
	I0528 21:54:51.219055   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:51.224849   69886 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:54:51.224920   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:54:51.265274   69886 cri.go:89] found id: "ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b"
	I0528 21:54:51.265304   69886 cri.go:89] found id: ""
	I0528 21:54:51.265314   69886 logs.go:276] 1 containers: [ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b]
	I0528 21:54:51.265388   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:51.270027   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:54:51.270104   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:54:51.316234   69886 cri.go:89] found id: "e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af"
	I0528 21:54:51.316259   69886 cri.go:89] found id: ""
	I0528 21:54:51.316269   69886 logs.go:276] 1 containers: [e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af]
	I0528 21:54:51.316324   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:51.320705   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:54:51.320771   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:54:51.358054   69886 cri.go:89] found id: "9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910"
	I0528 21:54:51.358079   69886 cri.go:89] found id: ""
	I0528 21:54:51.358089   69886 logs.go:276] 1 containers: [9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910]
	I0528 21:54:51.358136   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:51.363687   69886 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:54:51.363753   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:54:51.409441   69886 cri.go:89] found id: "e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a"
	I0528 21:54:51.409462   69886 cri.go:89] found id: ""
	I0528 21:54:51.409470   69886 logs.go:276] 1 containers: [e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a]
	I0528 21:54:51.409517   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:51.414069   69886 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:54:51.414125   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:54:51.454212   69886 cri.go:89] found id: ""
	I0528 21:54:51.454245   69886 logs.go:276] 0 containers: []
	W0528 21:54:51.454255   69886 logs.go:278] No container was found matching "kindnet"
	I0528 21:54:51.454263   69886 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0528 21:54:51.454324   69886 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0528 21:54:51.492146   69886 cri.go:89] found id: "6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba"
	I0528 21:54:51.492174   69886 cri.go:89] found id: "912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0"
	I0528 21:54:51.492181   69886 cri.go:89] found id: ""
	I0528 21:54:51.492190   69886 logs.go:276] 2 containers: [6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba 912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0]
	I0528 21:54:51.492262   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:51.497116   69886 ssh_runner.go:195] Run: which crictl
	I0528 21:54:51.501448   69886 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:54:51.501469   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:54:51.871114   69886 logs.go:123] Gathering logs for container status ...
	I0528 21:54:51.871151   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:54:51.918562   69886 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:54:51.918590   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0528 21:54:52.031780   69886 logs.go:123] Gathering logs for kube-apiserver [42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc] ...
	I0528 21:54:52.031819   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42608327556eaa064fa6e82cef74018360c97f5013dda1cc410971ec3f3efbfc"
	I0528 21:54:52.090798   69886 logs.go:123] Gathering logs for kube-proxy [9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910] ...
	I0528 21:54:52.090827   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a787e20b35dd25ec679d2f1c14ca9037b1950aeb59785718db686b82eec7910"
	I0528 21:54:52.131645   69886 logs.go:123] Gathering logs for kube-controller-manager [e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a] ...
	I0528 21:54:52.131673   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1f2c88b18006067101f4ed4699a4da7c60f1fe9aba5dd0713c9f8a8b823b93a"
	I0528 21:54:52.191137   69886 logs.go:123] Gathering logs for storage-provisioner [6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba] ...
	I0528 21:54:52.191172   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e80571418c7ddbc7f58be60c60bfe34a8d59ae95ffc11c19645f96311aec3ba"
	I0528 21:54:52.241028   69886 logs.go:123] Gathering logs for storage-provisioner [912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0] ...
	I0528 21:54:52.241054   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 912c92cb728e6173086e2c2a5dbf20a65ac9ca38ecb956a36d04b6725c5ca2e0"
	I0528 21:54:52.276075   69886 logs.go:123] Gathering logs for kubelet ...
	I0528 21:54:52.276115   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:54:52.328268   69886 logs.go:123] Gathering logs for dmesg ...
	I0528 21:54:52.328307   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 21:54:52.342509   69886 logs.go:123] Gathering logs for etcd [48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e] ...
	I0528 21:54:52.342542   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48e5c5e140f93ef7dcb03ee8ec54434e1d5cd08148f60d15766534405dfb453e"
	I0528 21:54:52.390934   69886 logs.go:123] Gathering logs for coredns [ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b] ...
	I0528 21:54:52.390980   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ebc2314ec3dcbb7168aff277439e5e74c3845e11e72728dccc07e1263a6a050b"
	I0528 21:54:52.429778   69886 logs.go:123] Gathering logs for kube-scheduler [e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af] ...
	I0528 21:54:52.429809   69886 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3d4c1df4c10f384cc42118d6aebd40ff3dbe33260e12fe0d4315da00b9d97af"
	I0528 21:54:54.975461   69886 system_pods.go:59] 8 kube-system pods found
	I0528 21:54:54.975495   69886 system_pods.go:61] "coredns-7db6d8ff4d-fmk2h" [a084dfb5-5818-4244-9052-a9f861b45617] Running
	I0528 21:54:54.975502   69886 system_pods.go:61] "etcd-no-preload-290122" [85b87ff0-50e4-4b23-b4dd-8442068b1af3] Running
	I0528 21:54:54.975508   69886 system_pods.go:61] "kube-apiserver-no-preload-290122" [2e2cecbc-1755-4cdc-8963-ab1e5b73e9f1] Running
	I0528 21:54:54.975514   69886 system_pods.go:61] "kube-controller-manager-no-preload-290122" [9a6218b4-fbd3-46ff-8eb5-a19f5ed9db72] Running
	I0528 21:54:54.975519   69886 system_pods.go:61] "kube-proxy-w45qh" [f962c73d-872d-4f78-a628-267cb0be49bb] Running
	I0528 21:54:54.975524   69886 system_pods.go:61] "kube-scheduler-no-preload-290122" [d191845d-d1ff-45d8-8601-b0395e2752f1] Running
	I0528 21:54:54.975532   69886 system_pods.go:61] "metrics-server-569cc877fc-j2khc" [2254e89c-3a61-4523-99a2-27ec92e73c9a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0528 21:54:54.975540   69886 system_pods.go:61] "storage-provisioner" [fc1a5463-05e0-4213-a7a8-2dd7f355ac36] Running
	I0528 21:54:54.975549   69886 system_pods.go:74] duration metric: took 3.841608486s to wait for pod list to return data ...
	I0528 21:54:54.975564   69886 default_sa.go:34] waiting for default service account to be created ...
	I0528 21:54:54.977757   69886 default_sa.go:45] found service account: "default"
	I0528 21:54:54.977794   69886 default_sa.go:55] duration metric: took 2.222664ms for default service account to be created ...
	I0528 21:54:54.977803   69886 system_pods.go:116] waiting for k8s-apps to be running ...
	I0528 21:54:54.982505   69886 system_pods.go:86] 8 kube-system pods found
	I0528 21:54:54.982527   69886 system_pods.go:89] "coredns-7db6d8ff4d-fmk2h" [a084dfb5-5818-4244-9052-a9f861b45617] Running
	I0528 21:54:54.982532   69886 system_pods.go:89] "etcd-no-preload-290122" [85b87ff0-50e4-4b23-b4dd-8442068b1af3] Running
	I0528 21:54:54.982537   69886 system_pods.go:89] "kube-apiserver-no-preload-290122" [2e2cecbc-1755-4cdc-8963-ab1e5b73e9f1] Running
	I0528 21:54:54.982541   69886 system_pods.go:89] "kube-controller-manager-no-preload-290122" [9a6218b4-fbd3-46ff-8eb5-a19f5ed9db72] Running
	I0528 21:54:54.982545   69886 system_pods.go:89] "kube-proxy-w45qh" [f962c73d-872d-4f78-a628-267cb0be49bb] Running
	I0528 21:54:54.982549   69886 system_pods.go:89] "kube-scheduler-no-preload-290122" [d191845d-d1ff-45d8-8601-b0395e2752f1] Running
	I0528 21:54:54.982554   69886 system_pods.go:89] "metrics-server-569cc877fc-j2khc" [2254e89c-3a61-4523-99a2-27ec92e73c9a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0528 21:54:54.982559   69886 system_pods.go:89] "storage-provisioner" [fc1a5463-05e0-4213-a7a8-2dd7f355ac36] Running
	I0528 21:54:54.982565   69886 system_pods.go:126] duration metric: took 4.757682ms to wait for k8s-apps to be running ...
	I0528 21:54:54.982571   69886 system_svc.go:44] waiting for kubelet service to be running ....
	I0528 21:54:54.982611   69886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 21:54:54.998318   69886 system_svc.go:56] duration metric: took 15.73926ms WaitForService to wait for kubelet
	I0528 21:54:54.998344   69886 kubeadm.go:576] duration metric: took 4m23.497907193s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 21:54:54.998364   69886 node_conditions.go:102] verifying NodePressure condition ...
	I0528 21:54:55.000709   69886 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 21:54:55.000726   69886 node_conditions.go:123] node cpu capacity is 2
	I0528 21:54:55.000737   69886 node_conditions.go:105] duration metric: took 2.368195ms to run NodePressure ...
	I0528 21:54:55.000747   69886 start.go:240] waiting for startup goroutines ...
	I0528 21:54:55.000754   69886 start.go:245] waiting for cluster config update ...
	I0528 21:54:55.000767   69886 start.go:254] writing updated cluster config ...
	I0528 21:54:55.001043   69886 ssh_runner.go:195] Run: rm -f paused
	I0528 21:54:55.049907   69886 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0528 21:54:55.051941   69886 out.go:177] * Done! kubectl is now configured to use "no-preload-290122" cluster and "default" namespace by default
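
For reference, the block above is minikube polling the kube-system pods until every one is Running before it declares the "no-preload-290122" cluster ready. Below is a minimal sketch of that kind of readiness check with client-go; the kubeconfig path and the allSystemPodsRunning helper are illustrative assumptions, not minikube's own code.

package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// allSystemPodsRunning reports whether every pod in kube-system is Running or
// Succeeded, mirroring the "waiting for k8s-apps to be running" step in the log.
func allSystemPodsRunning(ctx context.Context, cs *kubernetes.Clientset) (bool, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return false, err
	}
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning && p.Status.Phase != corev1.PodSucceeded {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config") // assumed path
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	for {
		ok, err := allSystemPodsRunning(ctx, cs)
		if err == nil && ok {
			fmt.Println("all kube-system pods are running")
			return
		}
		select {
		case <-ctx.Done():
			log.Fatal("timed out waiting for kube-system pods")
		case <-time.After(2 * time.Second):
		}
	}
}

Accepting Succeeded alongside Running keeps completed one-shot pods from blocking the wait.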
	I0528 21:54:50.874003   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:54:53.946104   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:00.029992   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:03.098014   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:09.177976   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:12.250035   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:18.330105   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:21.402027   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
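
The repeated "Error dialing TCP ... no route to host" lines from process 73188 are the kvm2 driver waiting for the default-k8s-diff-port VM's SSH port to become reachable again. A rough sketch of that wait loop using plain net.DialTimeout, assuming host:22 and a fixed retry interval (an illustration of the pattern, not libmachine's code):

package main

import (
	"fmt"
	"log"
	"net"
	"time"
)

// waitForSSH dials addr repeatedly until the TCP connection succeeds or the
// deadline passes, mirroring the retry pattern visible in the log above.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		log.Printf("dial %s failed: %v (retrying)", addr, err)
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("timed out waiting for %s", addr)
}

func main() {
	if err := waitForSSH("192.168.72.48:22", 5*time.Minute); err != nil {
		log.Fatal(err)
	}
	fmt.Println("SSH port is reachable")
}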
	I0528 21:55:27.830110   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:55:27.830377   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:55:27.830409   70393 kubeadm.go:309] 
	I0528 21:55:27.830460   70393 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0528 21:55:27.830496   70393 kubeadm.go:309] 		timed out waiting for the condition
	I0528 21:55:27.830504   70393 kubeadm.go:309] 
	I0528 21:55:27.830563   70393 kubeadm.go:309] 	This error is likely caused by:
	I0528 21:55:27.830629   70393 kubeadm.go:309] 		- The kubelet is not running
	I0528 21:55:27.830806   70393 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0528 21:55:27.830833   70393 kubeadm.go:309] 
	I0528 21:55:27.830939   70393 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0528 21:55:27.830970   70393 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0528 21:55:27.830999   70393 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0528 21:55:27.831006   70393 kubeadm.go:309] 
	I0528 21:55:27.831089   70393 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0528 21:55:27.831161   70393 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0528 21:55:27.831168   70393 kubeadm.go:309] 
	I0528 21:55:27.831276   70393 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0528 21:55:27.831396   70393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0528 21:55:27.831491   70393 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0528 21:55:27.831586   70393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0528 21:55:27.831597   70393 kubeadm.go:309] 
	I0528 21:55:27.832385   70393 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0528 21:55:27.832478   70393 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0528 21:55:27.832569   70393 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
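
The [kubelet-check] probe that keeps failing above is an HTTP GET against the kubelet's healthz endpoint on localhost:10248; "connection refused" means nothing is listening there at all. A stand-alone version of that probe, assuming the default healthz port, can be handy when reproducing the failure directly on the node:

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}

	// Same endpoint kubeadm's kubelet-check polls; "connection refused" here
	// means the kubelet process is not listening at all.
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		log.Fatalf("kubelet healthz probe failed: %v", err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("kubelet healthz: %s (%s)\n", resp.Status, string(body))
}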
	W0528 21:55:27.832707   70393 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
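
After the first init attempt times out, the following lines show minikube wiping the half-initialized control plane with kubeadm reset and then re-running kubeadm init with the same config. A minimal sketch of that reset-and-retry sequence with os/exec; the kubeadm binary path is inferred from the PATH shown in the log, and the ignore-preflight list is abbreviated:

package main

import (
	"log"
	"os/exec"
)

// run executes a command with sudo, logs its combined output, and returns the
// error so the caller can decide whether to retry.
func run(name string, args ...string) error {
	cmd := exec.Command("sudo", append([]string{name}, args...)...)
	out, err := cmd.CombinedOutput()
	log.Printf("%s %v:\n%s", name, args, out)
	return err
}

func main() {
	// Binary directory is quoted in the log's PATH; the exact filename is assumed.
	kubeadm := "/var/lib/minikube/binaries/v1.20.0/kubeadm"

	// Wipe the half-initialized control plane first, as in the log below.
	if err := run(kubeadm, "reset", "--cri-socket", "/var/run/crio/crio.sock", "--force"); err != nil {
		log.Fatalf("kubeadm reset failed: %v", err)
	}

	// Then retry init with the same kubeadm config.
	// Abbreviated ignore list; the log passes a much longer one.
	if err := run(kubeadm, "init", "--config", "/var/tmp/minikube/kubeadm.yaml",
		"--ignore-preflight-errors=Port-10250,Swap,NumCPU,Mem"); err != nil {
		log.Fatalf("kubeadm init failed again: %v", err)
	}
	log.Println("kubeadm init succeeded on retry")
}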
	
	I0528 21:55:27.832768   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0528 21:55:28.286592   70393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 21:55:28.301095   70393 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0528 21:55:28.310856   70393 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0528 21:55:28.310875   70393 kubeadm.go:156] found existing configuration files:
	
	I0528 21:55:28.310916   70393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0528 21:55:28.319713   70393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0528 21:55:28.319757   70393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0528 21:55:28.328964   70393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0528 21:55:28.337404   70393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0528 21:55:28.337456   70393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0528 21:55:28.346480   70393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0528 21:55:28.355427   70393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0528 21:55:28.355475   70393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0528 21:55:28.364843   70393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0528 21:55:28.373821   70393 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0528 21:55:28.373874   70393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
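
The block above is minikube's stale-config check: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the endpoint is absent (here the files do not exist at all, so the greps exit with status 2 and the rm -f calls are no-ops). A small Go equivalent of that check, with the endpoint and paths taken from the log:

package main

import (
	"bytes"
	"log"
	"os"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}

	for _, path := range confs {
		data, err := os.ReadFile(path)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			// Missing file or wrong endpoint: treat the config as stale and remove it,
			// mirroring the "may not be in ... - will remove" messages in the log.
			if rmErr := os.Remove(path); rmErr != nil && !os.IsNotExist(rmErr) {
				log.Printf("could not remove %s: %v", path, rmErr)
			}
			continue
		}
		log.Printf("%s already points at %s, keeping it", path, endpoint)
	}
}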
	I0528 21:55:28.382542   70393 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0528 21:55:28.448539   70393 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0528 21:55:28.448744   70393 kubeadm.go:309] [preflight] Running pre-flight checks
	I0528 21:55:28.592911   70393 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0528 21:55:28.593029   70393 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0528 21:55:28.593137   70393 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0528 21:55:28.793805   70393 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0528 21:55:28.795709   70393 out.go:204]   - Generating certificates and keys ...
	I0528 21:55:28.795786   70393 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0528 21:55:28.795854   70393 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0528 21:55:28.795959   70393 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0528 21:55:28.796055   70393 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0528 21:55:28.796153   70393 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0528 21:55:28.796349   70393 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0528 21:55:28.796467   70393 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0528 21:55:28.796537   70393 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0528 21:55:28.796610   70393 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0528 21:55:28.796721   70393 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0528 21:55:28.796768   70393 kubeadm.go:309] [certs] Using the existing "sa" key
	I0528 21:55:28.796847   70393 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0528 21:55:28.946885   70393 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0528 21:55:29.128640   70393 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0528 21:55:29.240490   70393 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0528 21:55:29.542128   70393 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0528 21:55:29.563784   70393 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0528 21:55:29.565927   70393 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0528 21:55:29.566159   70393 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0528 21:55:29.711517   70393 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0528 21:55:27.482003   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:30.554006   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:29.713311   70393 out.go:204]   - Booting up control plane ...
	I0528 21:55:29.713420   70393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0528 21:55:29.717970   70393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0528 21:55:29.718779   70393 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0528 21:55:29.719429   70393 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0528 21:55:29.722781   70393 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0528 21:55:36.633958   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:39.710041   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:45.785968   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:48.861975   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:54.938007   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:55:58.014038   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:04.094039   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:07.162043   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:09.724902   70393 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0528 21:56:09.725334   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:56:09.725557   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:56:13.241997   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:14.726408   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:56:14.726667   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:56:16.314032   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:22.394150   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:25.465982   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:24.727314   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:56:24.727592   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:56:31.546004   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:34.617980   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:40.697993   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:43.770044   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:44.728635   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:56:44.728954   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:56:49.853977   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:52.922083   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:56:59.001998   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:02.073983   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:08.157974   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:11.226001   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:17.305964   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:20.377963   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:24.729385   70393 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0528 21:57:24.729659   70393 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0528 21:57:24.729688   70393 kubeadm.go:309] 
	I0528 21:57:24.729745   70393 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0528 21:57:24.729835   70393 kubeadm.go:309] 		timed out waiting for the condition
	I0528 21:57:24.729856   70393 kubeadm.go:309] 
	I0528 21:57:24.729898   70393 kubeadm.go:309] 	This error is likely caused by:
	I0528 21:57:24.729930   70393 kubeadm.go:309] 		- The kubelet is not running
	I0528 21:57:24.730023   70393 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0528 21:57:24.730030   70393 kubeadm.go:309] 
	I0528 21:57:24.730156   70393 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0528 21:57:24.730212   70393 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0528 21:57:24.730267   70393 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0528 21:57:24.730278   70393 kubeadm.go:309] 
	I0528 21:57:24.730403   70393 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0528 21:57:24.730522   70393 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0528 21:57:24.730533   70393 kubeadm.go:309] 
	I0528 21:57:24.730669   70393 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0528 21:57:24.730788   70393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0528 21:57:24.730899   70393 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0528 21:57:24.731020   70393 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0528 21:57:24.731039   70393 kubeadm.go:309] 
	I0528 21:57:24.731657   70393 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0528 21:57:24.731752   70393 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0528 21:57:24.731861   70393 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0528 21:57:24.731942   70393 kubeadm.go:393] duration metric: took 7m57.905523124s to StartCluster
	I0528 21:57:24.731997   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0528 21:57:24.732064   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0528 21:57:24.772889   70393 cri.go:89] found id: ""
	I0528 21:57:24.772916   70393 logs.go:276] 0 containers: []
	W0528 21:57:24.772923   70393 logs.go:278] No container was found matching "kube-apiserver"
	I0528 21:57:24.772929   70393 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0528 21:57:24.772988   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0528 21:57:24.806418   70393 cri.go:89] found id: ""
	I0528 21:57:24.806447   70393 logs.go:276] 0 containers: []
	W0528 21:57:24.806458   70393 logs.go:278] No container was found matching "etcd"
	I0528 21:57:24.806467   70393 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0528 21:57:24.806534   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0528 21:57:24.844994   70393 cri.go:89] found id: ""
	I0528 21:57:24.845020   70393 logs.go:276] 0 containers: []
	W0528 21:57:24.845028   70393 logs.go:278] No container was found matching "coredns"
	I0528 21:57:24.845035   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0528 21:57:24.845098   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0528 21:57:24.880517   70393 cri.go:89] found id: ""
	I0528 21:57:24.880547   70393 logs.go:276] 0 containers: []
	W0528 21:57:24.880558   70393 logs.go:278] No container was found matching "kube-scheduler"
	I0528 21:57:24.880566   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0528 21:57:24.880615   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0528 21:57:24.917534   70393 cri.go:89] found id: ""
	I0528 21:57:24.917561   70393 logs.go:276] 0 containers: []
	W0528 21:57:24.917569   70393 logs.go:278] No container was found matching "kube-proxy"
	I0528 21:57:24.917575   70393 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0528 21:57:24.917624   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0528 21:57:24.952898   70393 cri.go:89] found id: ""
	I0528 21:57:24.952929   70393 logs.go:276] 0 containers: []
	W0528 21:57:24.952940   70393 logs.go:278] No container was found matching "kube-controller-manager"
	I0528 21:57:24.952948   70393 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0528 21:57:24.953011   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0528 21:57:24.994957   70393 cri.go:89] found id: ""
	I0528 21:57:24.994983   70393 logs.go:276] 0 containers: []
	W0528 21:57:24.994990   70393 logs.go:278] No container was found matching "kindnet"
	I0528 21:57:24.994996   70393 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0528 21:57:24.995046   70393 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0528 21:57:25.032594   70393 cri.go:89] found id: ""
	I0528 21:57:25.032617   70393 logs.go:276] 0 containers: []
	W0528 21:57:25.032624   70393 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0528 21:57:25.032633   70393 logs.go:123] Gathering logs for describe nodes ...
	I0528 21:57:25.032645   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0528 21:57:25.112858   70393 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0528 21:57:25.112882   70393 logs.go:123] Gathering logs for CRI-O ...
	I0528 21:57:25.112894   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0528 21:57:25.217748   70393 logs.go:123] Gathering logs for container status ...
	I0528 21:57:25.217792   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 21:57:25.289998   70393 logs.go:123] Gathering logs for kubelet ...
	I0528 21:57:25.290035   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0528 21:57:25.344833   70393 logs.go:123] Gathering logs for dmesg ...
	I0528 21:57:25.344868   70393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
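
With no control-plane containers found, the log-gathering step above falls back to the container runtime and the journal: crictl for each expected component, journalctl for CRI-O and the kubelet, and dmesg for kernel warnings. A compact sketch that collects the same diagnostics via os/exec, using the commands quoted in the log (run as root on the node; the tail truncation is omitted):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Commands mirror the ones minikube runs in the log above.
	cmds := [][]string{
		{"crictl", "ps", "-a", "--quiet", "--name=kube-apiserver"},
		{"crictl", "ps", "-a", "--quiet", "--name=etcd"},
		{"journalctl", "-u", "crio", "-n", "400"},
		{"journalctl", "-u", "kubelet", "-n", "400"},
		{"dmesg", "-PH", "-L=never", "--level", "warn,err,crit,alert,emerg"},
	}

	for _, c := range cmds {
		out, err := exec.Command("sudo", c...).CombinedOutput()
		fmt.Printf("### %v (err=%v)\n%s\n", c, err, out)
	}
}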
	W0528 21:57:25.360547   70393 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0528 21:57:25.360594   70393 out.go:239] * 
	W0528 21:57:25.360659   70393 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0528 21:57:25.360693   70393 out.go:239] * 
	W0528 21:57:25.361545   70393 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0528 21:57:25.365387   70393 out.go:177] 
	W0528 21:57:25.366681   70393 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0528 21:57:25.366731   70393 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0528 21:57:25.366772   70393 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0528 21:57:25.369011   70393 out.go:177] 
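
The suggestion above points at a cgroup-driver mismatch between the kubelet and CRI-O as the usual cause of this failure mode on v1.20.0. Below is a small, heavily hedged helper for comparing the two settings on the node: /var/lib/kubelet/config.yaml is the path quoted in the log, while /etc/crio/crio.conf is assumed to be the CRI-O config location on this image.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// printCgroupLines prints any line mentioning "cgroup" from the given file so
// the kubelet and CRI-O settings can be compared by eye.
func printCgroupLines(path string) {
	f, err := os.Open(path)
	if err != nil {
		fmt.Printf("%s: %v\n", path, err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		if strings.Contains(strings.ToLower(line), "cgroup") {
			fmt.Printf("%s: %s\n", path, strings.TrimSpace(line))
		}
	}
}

func main() {
	// /var/lib/kubelet/config.yaml is quoted in the log; /etc/crio/crio.conf is
	// assumed to be where CRI-O keeps its config on this image.
	printCgroupLines("/var/lib/kubelet/config.yaml")
	printCgroupLines("/etc/crio/crio.conf")
}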
	I0528 21:57:26.462093   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:29.530040   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:35.610027   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:38.682076   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:44.762057   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:47.838109   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:53.914000   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:57:56.986078   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:58:03.066042   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:58:06.138002   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:58:12.218031   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:58:15.290043   73188 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.48:22: connect: no route to host
	I0528 21:58:18.290952   73188 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 21:58:18.291006   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetMachineName
	I0528 21:58:18.291338   73188 buildroot.go:166] provisioning hostname "default-k8s-diff-port-249165"
	I0528 21:58:18.291363   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetMachineName
	I0528 21:58:18.291646   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 21:58:18.293181   73188 machine.go:97] duration metric: took 4m37.423637232s to provisionDockerMachine
	I0528 21:58:18.293224   73188 fix.go:56] duration metric: took 4m37.444947597s for fixHost
	I0528 21:58:18.293230   73188 start.go:83] releasing machines lock for "default-k8s-diff-port-249165", held for 4m37.444964638s
	W0528 21:58:18.293245   73188 start.go:713] error starting host: provision: host is not running
	W0528 21:58:18.293337   73188 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0528 21:58:18.293346   73188 start.go:728] Will try again in 5 seconds ...
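
"StartHost failed, but will try again" is the outer retry around fixHost: the machines lock is released and the whole provision step is attempted once more after a short pause, as the next lines show. A generic sketch of that fixed-delay retry wrapper (the 5-second delay and single retry mirror the log; the fixHost stub is a placeholder):

package main

import (
	"errors"
	"fmt"
	"log"
	"time"
)

// retryAfter runs fn up to attempts times, sleeping delay between failures,
// in the spirit of the "will try again in 5 seconds" step seen in the log.
func retryAfter(delay time.Duration, attempts int, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		if i < attempts-1 {
			log.Printf("attempt %d failed: %v; will try again in %s", i+1, err, delay)
			time.Sleep(delay)
		}
	}
	return err
}

func main() {
	// Stand-in for the real fixHost/provision step.
	fixHost := func() error { return errors.New("provision: host is not running") }

	if err := retryAfter(5*time.Second, 2, fixHost); err != nil {
		log.Fatalf("giving up: %v", err)
	}
	fmt.Println("host provisioned")
}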
	I0528 21:58:23.295554   73188 start.go:360] acquireMachinesLock for default-k8s-diff-port-249165: {Name:mk6fb3103b370c7f3d9923fd9de7afe2c77239d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0528 21:58:23.295664   73188 start.go:364] duration metric: took 68.737µs to acquireMachinesLock for "default-k8s-diff-port-249165"
	I0528 21:58:23.295686   73188 start.go:96] Skipping create...Using existing machine configuration
	I0528 21:58:23.295692   73188 fix.go:54] fixHost starting: 
	I0528 21:58:23.296036   73188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:58:23.296059   73188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:58:23.310971   73188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33471
	I0528 21:58:23.311354   73188 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:58:23.311769   73188 main.go:141] libmachine: Using API Version  1
	I0528 21:58:23.311791   73188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:58:23.312072   73188 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:58:23.312279   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 21:58:23.312406   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetState
	I0528 21:58:23.313815   73188 fix.go:112] recreateIfNeeded on default-k8s-diff-port-249165: state=Stopped err=<nil>
	I0528 21:58:23.313837   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	W0528 21:58:23.313981   73188 fix.go:138] unexpected machine state, will restart: <nil>
	I0528 21:58:23.315867   73188 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-249165" ...
	I0528 21:58:23.317068   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .Start
	I0528 21:58:23.317224   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Ensuring networks are active...
	I0528 21:58:23.317939   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Ensuring network default is active
	I0528 21:58:23.318317   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Ensuring network mk-default-k8s-diff-port-249165 is active
	I0528 21:58:23.318787   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Getting domain xml...
	I0528 21:58:23.319512   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Creating domain...
	I0528 21:58:24.556897   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting to get IP...
	I0528 21:58:24.557688   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:24.558217   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:24.558288   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:24.558188   74350 retry.go:31] will retry after 274.96624ms: waiting for machine to come up
	I0528 21:58:24.834950   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:24.835591   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:24.835621   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:24.835547   74350 retry.go:31] will retry after 271.693151ms: waiting for machine to come up
	I0528 21:58:25.109193   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:25.109736   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:25.109782   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:25.109675   74350 retry.go:31] will retry after 381.434148ms: waiting for machine to come up
	I0528 21:58:25.493383   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:25.493853   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:25.493880   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:25.493784   74350 retry.go:31] will retry after 384.034489ms: waiting for machine to come up
	I0528 21:58:25.879289   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:25.879822   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:25.879854   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:25.879749   74350 retry.go:31] will retry after 517.483073ms: waiting for machine to come up
	I0528 21:58:26.398450   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:26.399012   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:26.399089   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:26.399010   74350 retry.go:31] will retry after 757.371702ms: waiting for machine to come up
	I0528 21:58:27.157490   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:27.158014   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:27.158044   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:27.157971   74350 retry.go:31] will retry after 1.042611523s: waiting for machine to come up
	I0528 21:58:28.201704   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:28.202196   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:28.202229   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:28.202140   74350 retry.go:31] will retry after 1.287212665s: waiting for machine to come up
	I0528 21:58:29.490908   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:29.491356   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:29.491386   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:29.491287   74350 retry.go:31] will retry after 1.576442022s: waiting for machine to come up
	I0528 21:58:31.069493   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:31.069966   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:31.069995   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:31.069917   74350 retry.go:31] will retry after 2.245383669s: waiting for machine to come up
	I0528 21:58:33.317217   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:33.317670   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:33.317701   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:33.317608   74350 retry.go:31] will retry after 2.415705908s: waiting for machine to come up
	I0528 21:58:35.736148   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:35.736526   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:35.736549   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:35.736486   74350 retry.go:31] will retry after 3.463330934s: waiting for machine to come up
	I0528 21:58:39.201369   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:39.201852   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | unable to find current IP address of domain default-k8s-diff-port-249165 in network mk-default-k8s-diff-port-249165
	I0528 21:58:39.201885   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | I0528 21:58:39.201819   74350 retry.go:31] will retry after 4.496481714s: waiting for machine to come up
	I0528 21:58:43.699313   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:43.699760   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Found IP for machine: 192.168.72.48
	I0528 21:58:43.699783   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Reserving static IP address...
	I0528 21:58:43.699801   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has current primary IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:43.700262   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Reserved static IP address: 192.168.72.48
	I0528 21:58:43.700280   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Waiting for SSH to be available...
	I0528 21:58:43.700295   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-249165", mac: "52:54:00:f4:fc:a4", ip: "192.168.72.48"} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:43.700339   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | skip adding static IP to network mk-default-k8s-diff-port-249165 - found existing host DHCP lease matching {name: "default-k8s-diff-port-249165", mac: "52:54:00:f4:fc:a4", ip: "192.168.72.48"}
	I0528 21:58:43.700362   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | Getting to WaitForSSH function...
	I0528 21:58:43.702496   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:43.702910   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:43.702941   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:43.703104   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | Using SSH client type: external
	I0528 21:58:43.703126   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | Using SSH private key: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/default-k8s-diff-port-249165/id_rsa (-rw-------)
	I0528 21:58:43.703169   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.48 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18966-3963/.minikube/machines/default-k8s-diff-port-249165/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0528 21:58:43.703185   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | About to run SSH command:
	I0528 21:58:43.703211   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | exit 0
	I0528 21:58:43.825921   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | SSH cmd err, output: <nil>: 
	I0528 21:58:43.826314   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetConfigRaw
	I0528 21:58:43.826989   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetIP
	I0528 21:58:43.829337   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:43.829663   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:43.829685   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:43.829993   73188 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/default-k8s-diff-port-249165/config.json ...
	I0528 21:58:43.830227   73188 machine.go:94] provisionDockerMachine start ...
	I0528 21:58:43.830259   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 21:58:43.830499   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 21:58:43.832840   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:43.833193   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:43.833222   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:43.833382   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHPort
	I0528 21:58:43.833551   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:43.833687   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:43.833820   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHUsername
	I0528 21:58:43.833977   73188 main.go:141] libmachine: Using SSH client type: native
	I0528 21:58:43.834147   73188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.48 22 <nil> <nil>}
	I0528 21:58:43.834156   73188 main.go:141] libmachine: About to run SSH command:
	hostname
	I0528 21:58:43.938159   73188 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0528 21:58:43.938191   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetMachineName
	I0528 21:58:43.938426   73188 buildroot.go:166] provisioning hostname "default-k8s-diff-port-249165"
	I0528 21:58:43.938472   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetMachineName
	I0528 21:58:43.938684   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 21:58:43.941594   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:43.941986   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:43.942016   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:43.942195   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHPort
	I0528 21:58:43.942393   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:43.942550   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:43.942742   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHUsername
	I0528 21:58:43.942913   73188 main.go:141] libmachine: Using SSH client type: native
	I0528 21:58:43.943069   73188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.48 22 <nil> <nil>}
	I0528 21:58:43.943082   73188 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-249165 && echo "default-k8s-diff-port-249165" | sudo tee /etc/hostname
	I0528 21:58:44.060923   73188 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-249165
	
	I0528 21:58:44.060955   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 21:58:44.063621   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:44.063974   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:44.064008   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:44.064132   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHPort
	I0528 21:58:44.064326   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:44.064508   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:44.064660   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHUsername
	I0528 21:58:44.064818   73188 main.go:141] libmachine: Using SSH client type: native
	I0528 21:58:44.064999   73188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.48 22 <nil> <nil>}
	I0528 21:58:44.065016   73188 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-249165' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-249165/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-249165' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 21:58:44.174464   73188 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 21:58:44.174491   73188 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18966-3963/.minikube CaCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18966-3963/.minikube}
	I0528 21:58:44.174524   73188 buildroot.go:174] setting up certificates
	I0528 21:58:44.174538   73188 provision.go:84] configureAuth start
	I0528 21:58:44.174549   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetMachineName
	I0528 21:58:44.174838   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetIP
	I0528 21:58:44.177623   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:44.178024   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:44.178052   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:44.178250   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 21:58:44.180956   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:44.181305   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:44.181334   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:44.181500   73188 provision.go:143] copyHostCerts
	I0528 21:58:44.181571   73188 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem, removing ...
	I0528 21:58:44.181582   73188 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem
	I0528 21:58:44.181643   73188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem (1078 bytes)
	I0528 21:58:44.181753   73188 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem, removing ...
	I0528 21:58:44.181787   73188 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem
	I0528 21:58:44.181819   73188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem (1123 bytes)
	I0528 21:58:44.181892   73188 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem, removing ...
	I0528 21:58:44.181899   73188 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem
	I0528 21:58:44.181920   73188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem (1675 bytes)
	I0528 21:58:44.181984   73188 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-249165 san=[127.0.0.1 192.168.72.48 default-k8s-diff-port-249165 localhost minikube]
	I0528 21:58:44.490074   73188 provision.go:177] copyRemoteCerts
	I0528 21:58:44.490127   73188 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 21:58:44.490150   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 21:58:44.492735   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:44.493121   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:44.493156   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:44.493306   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHPort
	I0528 21:58:44.493526   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:44.493690   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHUsername
	I0528 21:58:44.493845   73188 sshutil.go:53] new ssh client: &{IP:192.168.72.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/default-k8s-diff-port-249165/id_rsa Username:docker}
	I0528 21:58:44.575620   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0528 21:58:44.601185   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0528 21:58:44.625266   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0528 21:58:44.648243   73188 provision.go:87] duration metric: took 473.69068ms to configureAuth
	I0528 21:58:44.648271   73188 buildroot.go:189] setting minikube options for container-runtime
	I0528 21:58:44.648430   73188 config.go:182] Loaded profile config "default-k8s-diff-port-249165": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 21:58:44.648502   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 21:58:44.651430   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:44.651793   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:44.651820   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:44.651960   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHPort
	I0528 21:58:44.652140   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:44.652277   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:44.652436   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHUsername
	I0528 21:58:44.652592   73188 main.go:141] libmachine: Using SSH client type: native
	I0528 21:58:44.652762   73188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.48 22 <nil> <nil>}
	I0528 21:58:44.652777   73188 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0528 21:58:44.923577   73188 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0528 21:58:44.923597   73188 machine.go:97] duration metric: took 1.093358522s to provisionDockerMachine
	I0528 21:58:44.923607   73188 start.go:293] postStartSetup for "default-k8s-diff-port-249165" (driver="kvm2")
	I0528 21:58:44.923618   73188 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0528 21:58:44.923649   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 21:58:44.924030   73188 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0528 21:58:44.924124   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 21:58:44.926704   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:44.927009   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:44.927038   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:44.927162   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHPort
	I0528 21:58:44.927347   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:44.927491   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHUsername
	I0528 21:58:44.927627   73188 sshutil.go:53] new ssh client: &{IP:192.168.72.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/default-k8s-diff-port-249165/id_rsa Username:docker}
	I0528 21:58:45.009429   73188 ssh_runner.go:195] Run: cat /etc/os-release
	I0528 21:58:45.014007   73188 info.go:137] Remote host: Buildroot 2023.02.9
	I0528 21:58:45.014032   73188 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/addons for local assets ...
	I0528 21:58:45.014094   73188 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/files for local assets ...
	I0528 21:58:45.014161   73188 filesync.go:149] local asset: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem -> 117602.pem in /etc/ssl/certs
	I0528 21:58:45.014265   73188 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0528 21:58:45.024039   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem --> /etc/ssl/certs/117602.pem (1708 bytes)
	I0528 21:58:45.050461   73188 start.go:296] duration metric: took 126.842658ms for postStartSetup
	I0528 21:58:45.050497   73188 fix.go:56] duration metric: took 21.754803931s for fixHost
	I0528 21:58:45.050519   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 21:58:45.053312   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:45.053639   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:45.053671   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:45.053821   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHPort
	I0528 21:58:45.054025   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:45.054198   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:45.054339   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHUsername
	I0528 21:58:45.054475   73188 main.go:141] libmachine: Using SSH client type: native
	I0528 21:58:45.054646   73188 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.48 22 <nil> <nil>}
	I0528 21:58:45.054657   73188 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0528 21:58:45.159430   73188 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716933525.136417037
	
	I0528 21:58:45.159460   73188 fix.go:216] guest clock: 1716933525.136417037
	I0528 21:58:45.159470   73188 fix.go:229] Guest: 2024-05-28 21:58:45.136417037 +0000 UTC Remote: 2024-05-28 21:58:45.05050169 +0000 UTC m=+304.341994853 (delta=85.915347ms)
	I0528 21:58:45.159495   73188 fix.go:200] guest clock delta is within tolerance: 85.915347ms
	I0528 21:58:45.159502   73188 start.go:83] releasing machines lock for "default-k8s-diff-port-249165", held for 21.863825672s
	I0528 21:58:45.159552   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 21:58:45.159830   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetIP
	I0528 21:58:45.162709   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:45.163053   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:45.163089   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:45.163264   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 21:58:45.163717   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 21:58:45.163931   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 21:58:45.164028   73188 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0528 21:58:45.164072   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 21:58:45.164139   73188 ssh_runner.go:195] Run: cat /version.json
	I0528 21:58:45.164164   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 21:58:45.167063   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:45.167215   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:45.167477   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:45.167505   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:45.167534   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:45.167551   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:45.167605   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHPort
	I0528 21:58:45.167811   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHPort
	I0528 21:58:45.167826   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:45.167992   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHUsername
	I0528 21:58:45.167998   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 21:58:45.168132   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHUsername
	I0528 21:58:45.168152   73188 sshutil.go:53] new ssh client: &{IP:192.168.72.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/default-k8s-diff-port-249165/id_rsa Username:docker}
	I0528 21:58:45.168279   73188 sshutil.go:53] new ssh client: &{IP:192.168.72.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/default-k8s-diff-port-249165/id_rsa Username:docker}
	I0528 21:58:45.243473   73188 ssh_runner.go:195] Run: systemctl --version
	I0528 21:58:45.275272   73188 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0528 21:58:45.416616   73188 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0528 21:58:45.423144   73188 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0528 21:58:45.423203   73188 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 21:58:45.438939   73188 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0528 21:58:45.438963   73188 start.go:494] detecting cgroup driver to use...
	I0528 21:58:45.439035   73188 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0528 21:58:45.454944   73188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 21:58:45.469976   73188 docker.go:217] disabling cri-docker service (if available) ...
	I0528 21:58:45.470031   73188 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0528 21:58:45.484152   73188 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0528 21:58:45.497541   73188 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0528 21:58:45.622055   73188 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0528 21:58:45.760388   73188 docker.go:233] disabling docker service ...
	I0528 21:58:45.760472   73188 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0528 21:58:45.779947   73188 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0528 21:58:45.794310   73188 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0528 21:58:45.926921   73188 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0528 21:58:46.042042   73188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0528 21:58:46.055486   73188 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 21:58:46.074285   73188 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0528 21:58:46.074347   73188 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:58:46.084646   73188 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0528 21:58:46.084709   73188 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:58:46.094701   73188 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:58:46.104877   73188 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:58:46.115549   73188 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0528 21:58:46.125973   73188 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:58:46.136293   73188 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:58:46.153570   73188 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 21:58:46.165428   73188 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0528 21:58:46.175167   73188 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0528 21:58:46.175224   73188 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0528 21:58:46.189687   73188 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0528 21:58:46.199630   73188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 21:58:46.322596   73188 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0528 21:58:46.465841   73188 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0528 21:58:46.465905   73188 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0528 21:58:46.471249   73188 start.go:562] Will wait 60s for crictl version
	I0528 21:58:46.471301   73188 ssh_runner.go:195] Run: which crictl
	I0528 21:58:46.474963   73188 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0528 21:58:46.514028   73188 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0528 21:58:46.514111   73188 ssh_runner.go:195] Run: crio --version
	I0528 21:58:46.544060   73188 ssh_runner.go:195] Run: crio --version
	I0528 21:58:46.577448   73188 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0528 21:58:46.578815   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetIP
	I0528 21:58:46.581500   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:46.581876   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 21:58:46.581918   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 21:58:46.582081   73188 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0528 21:58:46.586277   73188 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 21:58:46.599163   73188 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-249165 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-249165 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.48 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0528 21:58:46.599265   73188 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 21:58:46.599308   73188 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 21:58:46.636824   73188 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0528 21:58:46.636895   73188 ssh_runner.go:195] Run: which lz4
	I0528 21:58:46.640890   73188 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0528 21:58:46.645433   73188 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0528 21:58:46.645457   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0528 21:58:48.069572   73188 crio.go:462] duration metric: took 1.428706508s to copy over tarball
	I0528 21:58:48.069660   73188 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0528 21:58:50.289428   73188 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.2197347s)
	I0528 21:58:50.289459   73188 crio.go:469] duration metric: took 2.219854472s to extract the tarball
	I0528 21:58:50.289466   73188 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0528 21:58:50.329649   73188 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 21:58:50.373900   73188 crio.go:514] all images are preloaded for cri-o runtime.
	I0528 21:58:50.373922   73188 cache_images.go:84] Images are preloaded, skipping loading
	I0528 21:58:50.373928   73188 kubeadm.go:928] updating node { 192.168.72.48 8444 v1.30.1 crio true true} ...
	I0528 21:58:50.374059   73188 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-249165 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.48
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-249165 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0528 21:58:50.374142   73188 ssh_runner.go:195] Run: crio config
	I0528 21:58:50.430538   73188 cni.go:84] Creating CNI manager for ""
	I0528 21:58:50.430573   73188 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 21:58:50.430590   73188 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0528 21:58:50.430618   73188 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.48 APIServerPort:8444 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-249165 NodeName:default-k8s-diff-port-249165 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0528 21:58:50.430754   73188 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.48
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-249165"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0528 21:58:50.430822   73188 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0528 21:58:50.440906   73188 binaries.go:44] Found k8s binaries, skipping transfer
	I0528 21:58:50.440961   73188 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0528 21:58:50.450354   73188 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0528 21:58:50.467008   73188 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0528 21:58:50.483452   73188 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0528 21:58:50.500551   73188 ssh_runner.go:195] Run: grep 192.168.72.48	control-plane.minikube.internal$ /etc/hosts
	I0528 21:58:50.504597   73188 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.48	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 21:58:50.516659   73188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 21:58:50.634433   73188 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 21:58:50.651819   73188 certs.go:68] Setting up /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/default-k8s-diff-port-249165 for IP: 192.168.72.48
	I0528 21:58:50.651844   73188 certs.go:194] generating shared ca certs ...
	I0528 21:58:50.651868   73188 certs.go:226] acquiring lock for ca certs: {Name:mkf081a2e878fd451cfbdf01dc28a5f7a6a3abc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:58:50.652040   73188 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key
	I0528 21:58:50.652109   73188 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key
	I0528 21:58:50.652124   73188 certs.go:256] generating profile certs ...
	I0528 21:58:50.652223   73188 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/default-k8s-diff-port-249165/client.key
	I0528 21:58:50.652298   73188 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/default-k8s-diff-port-249165/apiserver.key.3e2f4fca
	I0528 21:58:50.652351   73188 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/default-k8s-diff-port-249165/proxy-client.key
	I0528 21:58:50.652505   73188 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem (1338 bytes)
	W0528 21:58:50.652546   73188 certs.go:480] ignoring /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760_empty.pem, impossibly tiny 0 bytes
	I0528 21:58:50.652558   73188 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem (1675 bytes)
	I0528 21:58:50.652589   73188 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem (1078 bytes)
	I0528 21:58:50.652617   73188 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem (1123 bytes)
	I0528 21:58:50.652645   73188 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem (1675 bytes)
	I0528 21:58:50.652687   73188 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem (1708 bytes)
	I0528 21:58:50.653356   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0528 21:58:50.687329   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0528 21:58:50.731844   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0528 21:58:50.758921   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0528 21:58:50.793162   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/default-k8s-diff-port-249165/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0528 21:58:50.820772   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/default-k8s-diff-port-249165/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0528 21:58:50.849830   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/default-k8s-diff-port-249165/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0528 21:58:50.875695   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/default-k8s-diff-port-249165/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0528 21:58:50.900876   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem --> /usr/share/ca-certificates/117602.pem (1708 bytes)
	I0528 21:58:50.925424   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0528 21:58:50.949453   73188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem --> /usr/share/ca-certificates/11760.pem (1338 bytes)
	I0528 21:58:50.973597   73188 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0528 21:58:50.990297   73188 ssh_runner.go:195] Run: openssl version
	I0528 21:58:50.996164   73188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11760.pem && ln -fs /usr/share/ca-certificates/11760.pem /etc/ssl/certs/11760.pem"
	I0528 21:58:51.007959   73188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11760.pem
	I0528 21:58:51.012987   73188 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 28 20:34 /usr/share/ca-certificates/11760.pem
	I0528 21:58:51.013062   73188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11760.pem
	I0528 21:58:51.019526   73188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11760.pem /etc/ssl/certs/51391683.0"
	I0528 21:58:51.031068   73188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/117602.pem && ln -fs /usr/share/ca-certificates/117602.pem /etc/ssl/certs/117602.pem"
	I0528 21:58:51.043064   73188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117602.pem
	I0528 21:58:51.048507   73188 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 28 20:34 /usr/share/ca-certificates/117602.pem
	I0528 21:58:51.048600   73188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117602.pem
	I0528 21:58:51.054818   73188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/117602.pem /etc/ssl/certs/3ec20f2e.0"
	I0528 21:58:51.065829   73188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0528 21:58:51.076414   73188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0528 21:58:51.081090   73188 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 28 20:22 /usr/share/ca-certificates/minikubeCA.pem
	I0528 21:58:51.081141   73188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0528 21:58:51.086736   73188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
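
The four `openssl x509 -hash -noout` / `ln -fs` pairs above install each CA into the system trust store under its OpenSSL subject-hash name (for example `b5213941.0` for minikubeCA.pem). A minimal sketch of that step, run locally with an `openssl` binary and illustrative paths rather than through minikube's ssh_runner:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash mirrors the "openssl x509 -hash -noout" + "ln -fs" pair
// seen in the log: compute the OpenSSL subject hash of certPath and symlink
// it into certsDir as "<hash>.0".
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	// ln -fs: drop any stale link, then point the new one at certPath.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	// Illustrative paths only.
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```
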
	I0528 21:58:51.096968   73188 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0528 21:58:51.101288   73188 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0528 21:58:51.107082   73188 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0528 21:58:51.112759   73188 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0528 21:58:51.118504   73188 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0528 21:58:51.124067   73188 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0528 21:58:51.129783   73188 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
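
Each `openssl x509 -noout -checkend 86400` call above succeeds only if the certificate is still valid 24 hours from now. The same condition can be tested directly against the certificate's NotAfter field; a sketch, assuming a PEM-encoded certificate at an illustrative path (not minikube's implementation):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the condition that "openssl x509 -checkend <seconds>" checks.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
```
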
	I0528 21:58:51.135390   73188 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-249165 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-249165 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.48 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:58:51.135521   73188 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0528 21:58:51.135583   73188 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0528 21:58:51.173919   73188 cri.go:89] found id: ""
	I0528 21:58:51.173995   73188 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0528 21:58:51.184361   73188 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0528 21:58:51.184381   73188 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0528 21:58:51.184386   73188 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0528 21:58:51.184424   73188 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0528 21:58:51.194386   73188 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0528 21:58:51.195726   73188 kubeconfig.go:125] found "default-k8s-diff-port-249165" server: "https://192.168.72.48:8444"
	I0528 21:58:51.198799   73188 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0528 21:58:51.208118   73188 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.48
	I0528 21:58:51.208146   73188 kubeadm.go:1154] stopping kube-system containers ...
	I0528 21:58:51.208157   73188 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0528 21:58:51.208193   73188 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0528 21:58:51.252026   73188 cri.go:89] found id: ""
	I0528 21:58:51.252089   73188 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0528 21:58:51.269404   73188 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0528 21:58:51.279728   73188 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0528 21:58:51.279744   73188 kubeadm.go:156] found existing configuration files:
	
	I0528 21:58:51.279790   73188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0528 21:58:51.289352   73188 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0528 21:58:51.289396   73188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0528 21:58:51.299059   73188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0528 21:58:51.308375   73188 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0528 21:58:51.308425   73188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0528 21:58:51.317866   73188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0528 21:58:51.327433   73188 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0528 21:58:51.327488   73188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0528 21:58:51.337148   73188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0528 21:58:51.346358   73188 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0528 21:58:51.346410   73188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0528 21:58:51.355689   73188 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0528 21:58:51.365235   73188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 21:58:51.488772   73188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 21:58:52.553360   73188 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.064544437s)
	I0528 21:58:52.553398   73188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0528 21:58:52.780281   73188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 21:58:52.839188   73188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0528 21:58:52.914117   73188 api_server.go:52] waiting for apiserver process to appear ...
	I0528 21:58:52.914222   73188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:58:53.415170   73188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:58:53.914987   73188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:58:53.933842   73188 api_server.go:72] duration metric: took 1.019725255s to wait for apiserver process to appear ...
	I0528 21:58:53.933869   73188 api_server.go:88] waiting for apiserver healthz status ...
	I0528 21:58:53.933886   73188 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8444/healthz ...
	I0528 21:58:53.934358   73188 api_server.go:269] stopped: https://192.168.72.48:8444/healthz: Get "https://192.168.72.48:8444/healthz": dial tcp 192.168.72.48:8444: connect: connection refused
	I0528 21:58:54.434146   73188 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8444/healthz ...
	I0528 21:58:56.813345   73188 api_server.go:279] https://192.168.72.48:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0528 21:58:56.813384   73188 api_server.go:103] status: https://192.168.72.48:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0528 21:58:56.813396   73188 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8444/healthz ...
	I0528 21:58:56.821906   73188 api_server.go:279] https://192.168.72.48:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0528 21:58:56.821935   73188 api_server.go:103] status: https://192.168.72.48:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0528 21:58:56.934069   73188 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8444/healthz ...
	I0528 21:58:56.941002   73188 api_server.go:279] https://192.168.72.48:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 21:58:56.941034   73188 api_server.go:103] status: https://192.168.72.48:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 21:58:57.434777   73188 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8444/healthz ...
	I0528 21:58:57.439312   73188 api_server.go:279] https://192.168.72.48:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 21:58:57.439345   73188 api_server.go:103] status: https://192.168.72.48:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 21:58:57.934912   73188 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8444/healthz ...
	I0528 21:58:57.941171   73188 api_server.go:279] https://192.168.72.48:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 21:58:57.941201   73188 api_server.go:103] status: https://192.168.72.48:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 21:58:58.434198   73188 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8444/healthz ...
	I0528 21:58:58.438164   73188 api_server.go:279] https://192.168.72.48:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 21:58:58.438190   73188 api_server.go:103] status: https://192.168.72.48:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 21:58:58.934813   73188 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8444/healthz ...
	I0528 21:58:58.939873   73188 api_server.go:279] https://192.168.72.48:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 21:58:58.939899   73188 api_server.go:103] status: https://192.168.72.48:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 21:58:59.434373   73188 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8444/healthz ...
	I0528 21:58:59.438639   73188 api_server.go:279] https://192.168.72.48:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 21:58:59.438662   73188 api_server.go:103] status: https://192.168.72.48:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 21:58:59.934909   73188 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8444/healthz ...
	I0528 21:58:59.940297   73188 api_server.go:279] https://192.168.72.48:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 21:58:59.940331   73188 api_server.go:103] status: https://192.168.72.48:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 21:59:00.434920   73188 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8444/healthz ...
	I0528 21:59:00.440734   73188 api_server.go:279] https://192.168.72.48:8444/healthz returned 200:
	ok
	I0528 21:59:00.447107   73188 api_server.go:141] control plane version: v1.30.1
	I0528 21:59:00.447129   73188 api_server.go:131] duration metric: took 6.513254325s to wait for apiserver health ...
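
The healthz loop above tolerates a refused connection, then 403 (presumably because the RBAC bootstrap roles that grant anonymous access to /healthz do not exist yet), then 500 while post-start hooks finish, and only stops on a 200 `ok`. A minimal polling sketch, assuming an endpoint probed without client certificates; this is not minikube's api_server.go code:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200, or until timeout
// elapses. TLS verification is skipped because the probe presents no client
// certificate (illustrative only).
func waitForHealthz(url string, interval, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body)
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		} else {
			fmt.Printf("healthz unreachable (%v), retrying\n", err)
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.72.48:8444/healthz", 500*time.Millisecond, 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```
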
	I0528 21:59:00.447137   73188 cni.go:84] Creating CNI manager for ""
	I0528 21:59:00.447143   73188 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 21:59:00.449008   73188 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0528 21:59:00.450184   73188 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0528 21:59:00.461520   73188 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
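
The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration announced by the "Configuring bridge CNI" step. Its exact contents are not shown in the log; the sketch below writes a typical bridge + portmap conflist with illustrative values (pod CIDR, bridge name), not the file minikube actually generates:

```go
package main

import (
	"fmt"
	"os"
)

// A typical two-plugin conflist for the bridge CNI: the bridge plugin handles
// pod networking with host-local IPAM, portmap handles hostPort mappings.
// All values here are illustrative.
const bridgeConflist = `{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`

func main() {
	// Written to a temp path here; on a node this would be /etc/cni/net.d/1-k8s.conflist.
	if err := os.WriteFile("/tmp/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```
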
	I0528 21:59:00.480494   73188 system_pods.go:43] waiting for kube-system pods to appear ...
	I0528 21:59:00.491722   73188 system_pods.go:59] 8 kube-system pods found
	I0528 21:59:00.491755   73188 system_pods.go:61] "coredns-7db6d8ff4d-qk6tz" [d3250a5a-2eda-41d3-86e2-227e85da8cb6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0528 21:59:00.491764   73188 system_pods.go:61] "etcd-default-k8s-diff-port-249165" [e1179b11-47b9-4803-91bb-a8d8470aac40] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0528 21:59:00.491771   73188 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-249165" [7f6c0680-8827-4f15-90e5-f8d9e1d1bc8c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0528 21:59:00.491780   73188 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-249165" [4d6f8bb3-0f4b-41fa-9b02-3b2c79513bf5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0528 21:59:00.491786   73188 system_pods.go:61] "kube-proxy-fvmjv" [df55e25a-a79a-4293-9636-31f5ebc4fc77] Running
	I0528 21:59:00.491791   73188 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-249165" [82200561-6687-448d-b73f-d0e047dec773] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0528 21:59:00.491797   73188 system_pods.go:61] "metrics-server-569cc877fc-k2q4p" [d1ec23de-6293-42a8-80f3-e28e007b6a34] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0528 21:59:00.491802   73188 system_pods.go:61] "storage-provisioner" [1f84dc9c-6b4e-44c9-82a2-5dabcb0b2178] Running
	I0528 21:59:00.491808   73188 system_pods.go:74] duration metric: took 11.287283ms to wait for pod list to return data ...
	I0528 21:59:00.491817   73188 node_conditions.go:102] verifying NodePressure condition ...
	I0528 21:59:00.495098   73188 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 21:59:00.495124   73188 node_conditions.go:123] node cpu capacity is 2
	I0528 21:59:00.495135   73188 node_conditions.go:105] duration metric: took 3.313626ms to run NodePressure ...
	I0528 21:59:00.495151   73188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 21:59:00.782161   73188 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0528 21:59:00.786287   73188 kubeadm.go:733] kubelet initialised
	I0528 21:59:00.786308   73188 kubeadm.go:734] duration metric: took 4.112496ms waiting for restarted kubelet to initialise ...
	I0528 21:59:00.786316   73188 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 21:59:00.790951   73188 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-qk6tz" in "kube-system" namespace to be "Ready" ...
	I0528 21:59:00.795459   73188 pod_ready.go:97] node "default-k8s-diff-port-249165" hosting pod "coredns-7db6d8ff4d-qk6tz" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-249165" has status "Ready":"False"
	I0528 21:59:00.795486   73188 pod_ready.go:81] duration metric: took 4.510715ms for pod "coredns-7db6d8ff4d-qk6tz" in "kube-system" namespace to be "Ready" ...
	E0528 21:59:00.795496   73188 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-249165" hosting pod "coredns-7db6d8ff4d-qk6tz" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-249165" has status "Ready":"False"
	I0528 21:59:00.795505   73188 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-249165" in "kube-system" namespace to be "Ready" ...
	I0528 21:59:00.799372   73188 pod_ready.go:97] node "default-k8s-diff-port-249165" hosting pod "etcd-default-k8s-diff-port-249165" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-249165" has status "Ready":"False"
	I0528 21:59:00.799395   73188 pod_ready.go:81] duration metric: took 3.878119ms for pod "etcd-default-k8s-diff-port-249165" in "kube-system" namespace to be "Ready" ...
	E0528 21:59:00.799405   73188 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-249165" hosting pod "etcd-default-k8s-diff-port-249165" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-249165" has status "Ready":"False"
	I0528 21:59:00.799412   73188 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-249165" in "kube-system" namespace to be "Ready" ...
	I0528 21:59:00.803708   73188 pod_ready.go:97] node "default-k8s-diff-port-249165" hosting pod "kube-apiserver-default-k8s-diff-port-249165" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-249165" has status "Ready":"False"
	I0528 21:59:00.803732   73188 pod_ready.go:81] duration metric: took 4.312817ms for pod "kube-apiserver-default-k8s-diff-port-249165" in "kube-system" namespace to be "Ready" ...
	E0528 21:59:00.803744   73188 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-249165" hosting pod "kube-apiserver-default-k8s-diff-port-249165" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-249165" has status "Ready":"False"
	I0528 21:59:00.803752   73188 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-249165" in "kube-system" namespace to be "Ready" ...
	I0528 21:59:00.883526   73188 pod_ready.go:97] node "default-k8s-diff-port-249165" hosting pod "kube-controller-manager-default-k8s-diff-port-249165" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-249165" has status "Ready":"False"
	I0528 21:59:00.883552   73188 pod_ready.go:81] duration metric: took 79.787719ms for pod "kube-controller-manager-default-k8s-diff-port-249165" in "kube-system" namespace to be "Ready" ...
	E0528 21:59:00.883562   73188 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-249165" hosting pod "kube-controller-manager-default-k8s-diff-port-249165" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-249165" has status "Ready":"False"
	I0528 21:59:00.883569   73188 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fvmjv" in "kube-system" namespace to be "Ready" ...
	I0528 21:59:01.284553   73188 pod_ready.go:92] pod "kube-proxy-fvmjv" in "kube-system" namespace has status "Ready":"True"
	I0528 21:59:01.284580   73188 pod_ready.go:81] duration metric: took 401.003384ms for pod "kube-proxy-fvmjv" in "kube-system" namespace to be "Ready" ...
	I0528 21:59:01.284590   73188 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-249165" in "kube-system" namespace to be "Ready" ...
	I0528 21:59:03.293222   73188 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-249165" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:04.291145   73188 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-249165" in "kube-system" namespace has status "Ready":"True"
	I0528 21:59:04.291171   73188 pod_ready.go:81] duration metric: took 3.006571778s for pod "kube-scheduler-default-k8s-diff-port-249165" in "kube-system" namespace to be "Ready" ...
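
Each pod_ready wait above resolves once the pod's `Ready` condition turns `True` (the metrics-server pod polled below never gets there, which is why the loop keeps reporting "Ready":"False"). A sketch of the same condition check using client-go, with an illustrative kubeconfig path and pod name; this is not minikube's pod_ready helper:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the named pod currently has condition Ready=True.
func podReady(clientset *kubernetes.Clientset, namespace, name string) (bool, error) {
	pod, err := clientset.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Illustrative kubeconfig path.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := podReady(clientset, "kube-system", "kube-scheduler-default-k8s-diff-port-249165")
	fmt.Println("ready:", ready, "err:", err)
}
```
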
	I0528 21:59:04.291183   73188 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace to be "Ready" ...
	I0528 21:59:06.297256   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:08.299092   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:10.797261   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:12.797546   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:15.297532   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:17.297769   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:19.298152   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:21.797794   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:24.298073   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:26.797503   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:29.297699   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:31.298091   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:33.799278   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:36.298358   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:38.298659   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:40.797501   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:43.297098   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:45.297322   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:47.798004   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:49.798749   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:52.296950   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:54.297779   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:56.297921   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 21:59:58.797953   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:01.297566   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:03.302555   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:05.797610   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:07.797893   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:09.798237   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:12.297953   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:14.298232   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:16.798660   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:19.296867   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:21.297325   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:23.797687   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:26.298657   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:28.798073   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:31.299219   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:33.800018   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:36.297914   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:38.297984   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:40.796919   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:42.798156   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:44.800231   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:47.297425   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:49.800316   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:52.297415   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:54.297549   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:56.798787   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:00:59.297851   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:01.298008   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:03.298732   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:05.797817   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:07.797913   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:10.297286   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:12.797866   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:14.799144   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:17.297592   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:19.298065   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:21.797973   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:23.798794   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:26.298087   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:28.300587   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:30.797976   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:33.297574   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:35.298403   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:37.797436   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:40.300414   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:42.797172   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:45.297340   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:47.297684   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:49.298815   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:51.299597   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:53.798447   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:56.297483   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:01:58.298264   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:00.798507   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:03.297276   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:05.299518   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:07.799770   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:10.300402   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:12.796971   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:14.798057   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:16.798315   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:18.800481   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:21.298816   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:23.797133   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:25.798165   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:28.297030   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:30.797031   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:32.797960   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:34.798334   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:37.298013   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:39.797122   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:42.297054   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:44.297976   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:46.797135   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:48.797338   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:50.797608   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:53.299621   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:55.797973   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:57.798174   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:03:00.298537   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:03:02.796804   73188 pod_ready.go:102] pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace has status "Ready":"False"
	I0528 22:03:04.291841   73188 pod_ready.go:81] duration metric: took 4m0.000641837s for pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace to be "Ready" ...
	E0528 22:03:04.291876   73188 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-k2q4p" in "kube-system" namespace to be "Ready" (will not retry!)
	I0528 22:03:04.291893   73188 pod_ready.go:38] duration metric: took 4m3.505569148s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
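The repeated pod_ready.go:102 lines above are the per-poll status of minikube's extra wait: the metrics-server pod never reports a Ready condition of "True", so the 4m0s budget expires and the control-plane restart is abandoned. A minimal sketch of that style of readiness wait, written here with client-go; the function name waitPodReady and the fixed 2-second poll interval are illustrative, not minikube's actual helper.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls the named pod until its Ready condition is True
// or the timeout expires. Illustrative only; minikube's pod_ready.go
// adds retry-on-error handling and richer logging.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
			fmt.Printf("pod %q in %q namespace has status \"Ready\":\"False\"\n", name, ns)
		}
		time.Sleep(2 * time.Second) // roughly the cadence visible in the log above
	}
	return fmt.Errorf("timed out waiting %s for pod %q in %q namespace to be Ready", timeout, name, ns)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18966-3963/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(cs, "kube-system", "metrics-server-569cc877fc-k2q4p", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}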
	I0528 22:03:04.291917   73188 kubeadm.go:591] duration metric: took 4m13.107527237s to restartPrimaryControlPlane
	W0528 22:03:04.291969   73188 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0528 22:03:04.291999   73188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0528 22:03:35.997887   73188 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.705862339s)
	I0528 22:03:35.997980   73188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 22:03:36.013927   73188 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0528 22:03:36.023856   73188 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0528 22:03:36.033329   73188 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0528 22:03:36.033349   73188 kubeadm.go:156] found existing configuration files:
	
	I0528 22:03:36.033385   73188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0528 22:03:36.042504   73188 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0528 22:03:36.042555   73188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0528 22:03:36.051990   73188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0528 22:03:36.061602   73188 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0528 22:03:36.061672   73188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0528 22:03:36.071582   73188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0528 22:03:36.081217   73188 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0528 22:03:36.081289   73188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0528 22:03:36.091380   73188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0528 22:03:36.101427   73188 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0528 22:03:36.101491   73188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
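The grep/rm sequence above is a staleness check: each kubeconfig under /etc/kubernetes is kept only if it still references https://control-plane.minikube.internal:8444, and here all four files are already gone after the kubeadm reset, so each check falls through to the removal. A rough local equivalent of that check in Go, assuming direct file access rather than minikube's ssh_runner:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8444"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: remove it so kubeadm init
			// regenerates a kubeconfig pointing at the expected endpoint.
			fmt.Printf("removing stale config %s\n", f)
			_ = os.Remove(f)
			continue
		}
		fmt.Printf("keeping %s\n", f)
	}
}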
	I0528 22:03:36.111166   73188 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0528 22:03:36.167427   73188 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0528 22:03:36.167584   73188 kubeadm.go:309] [preflight] Running pre-flight checks
	I0528 22:03:36.319657   73188 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0528 22:03:36.319762   73188 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0528 22:03:36.319861   73188 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0528 22:03:36.570417   73188 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0528 22:03:36.572409   73188 out.go:204]   - Generating certificates and keys ...
	I0528 22:03:36.572503   73188 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0528 22:03:36.572615   73188 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0528 22:03:36.572723   73188 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0528 22:03:36.572801   73188 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0528 22:03:36.572895   73188 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0528 22:03:36.572944   73188 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0528 22:03:36.572999   73188 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0528 22:03:36.573087   73188 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0528 22:03:36.573192   73188 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0528 22:03:36.573348   73188 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0528 22:03:36.573818   73188 kubeadm.go:309] [certs] Using the existing "sa" key
	I0528 22:03:36.573889   73188 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0528 22:03:36.671532   73188 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0528 22:03:36.741211   73188 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0528 22:03:36.908326   73188 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0528 22:03:37.058636   73188 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0528 22:03:37.237907   73188 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0528 22:03:37.238660   73188 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0528 22:03:37.242660   73188 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0528 22:03:37.244632   73188 out.go:204]   - Booting up control plane ...
	I0528 22:03:37.244721   73188 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0528 22:03:37.244790   73188 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0528 22:03:37.244999   73188 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0528 22:03:37.267448   73188 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0528 22:03:37.268482   73188 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0528 22:03:37.268550   73188 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0528 22:03:37.405936   73188 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0528 22:03:37.406050   73188 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0528 22:03:37.907833   73188 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.378139ms
	I0528 22:03:37.907936   73188 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0528 22:03:42.910213   73188 kubeadm.go:309] [api-check] The API server is healthy after 5.00224578s
	I0528 22:03:42.926650   73188 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0528 22:03:42.943917   73188 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0528 22:03:42.972044   73188 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0528 22:03:42.972264   73188 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-249165 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0528 22:03:42.986882   73188 kubeadm.go:309] [bootstrap-token] Using token: cf4624.vgyi0c4jykmr5x8u
	I0528 22:03:42.988295   73188 out.go:204]   - Configuring RBAC rules ...
	I0528 22:03:42.988438   73188 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0528 22:03:42.994583   73188 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0528 22:03:43.003191   73188 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0528 22:03:43.007110   73188 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0528 22:03:43.014038   73188 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0528 22:03:43.022358   73188 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0528 22:03:43.322836   73188 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0528 22:03:43.790286   73188 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0528 22:03:44.317555   73188 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0528 22:03:44.318811   73188 kubeadm.go:309] 
	I0528 22:03:44.318906   73188 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0528 22:03:44.318933   73188 kubeadm.go:309] 
	I0528 22:03:44.319041   73188 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0528 22:03:44.319052   73188 kubeadm.go:309] 
	I0528 22:03:44.319073   73188 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0528 22:03:44.319128   73188 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0528 22:03:44.319171   73188 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0528 22:03:44.319178   73188 kubeadm.go:309] 
	I0528 22:03:44.319333   73188 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0528 22:03:44.319349   73188 kubeadm.go:309] 
	I0528 22:03:44.319390   73188 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0528 22:03:44.319395   73188 kubeadm.go:309] 
	I0528 22:03:44.319437   73188 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0528 22:03:44.319501   73188 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0528 22:03:44.319597   73188 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0528 22:03:44.319617   73188 kubeadm.go:309] 
	I0528 22:03:44.319758   73188 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0528 22:03:44.319881   73188 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0528 22:03:44.319894   73188 kubeadm.go:309] 
	I0528 22:03:44.320006   73188 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token cf4624.vgyi0c4jykmr5x8u \
	I0528 22:03:44.320098   73188 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1de7c92f7c6fda1d3adee02fe3e58e442e8ad50d9d224864ca8764aa1a3ae8bb \
	I0528 22:03:44.320118   73188 kubeadm.go:309] 	--control-plane 
	I0528 22:03:44.320125   73188 kubeadm.go:309] 
	I0528 22:03:44.320201   73188 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0528 22:03:44.320209   73188 kubeadm.go:309] 
	I0528 22:03:44.320284   73188 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token cf4624.vgyi0c4jykmr5x8u \
	I0528 22:03:44.320405   73188 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:1de7c92f7c6fda1d3adee02fe3e58e442e8ad50d9d224864ca8764aa1a3ae8bb 
	I0528 22:03:44.320885   73188 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
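With the stale configs cleared, the control plane is re-created by running kubeadm init against the generated /var/tmp/minikube/kubeadm.yaml, skipping the preflight checks listed in the Start line above. A hedged sketch of driving that command from Go; the --ignore-preflight-errors list is abbreviated, and the real invocation runs over SSH inside the VM rather than locally.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the kubeadm init invocation in the log, with an abbreviated
	// preflight-error list for readability.
	cmd := exec.Command("/bin/bash", "-c",
		`sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init `+
			`--config /var/tmp/minikube/kubeadm.yaml `+
			`--ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem`)
	out, err := cmd.CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		fmt.Println("kubeadm init failed:", err)
	}
}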
	I0528 22:03:44.320929   73188 cni.go:84] Creating CNI manager for ""
	I0528 22:03:44.320945   73188 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 22:03:44.322688   73188 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0528 22:03:44.323999   73188 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0528 22:03:44.335532   73188 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0528 22:03:44.356272   73188 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0528 22:03:44.356380   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:44.356387   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-249165 minikube.k8s.io/updated_at=2024_05_28T22_03_44_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82 minikube.k8s.io/name=default-k8s-diff-port-249165 minikube.k8s.io/primary=true
	I0528 22:03:44.384624   73188 ops.go:34] apiserver oom_adj: -16
	I0528 22:03:44.563265   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:45.063599   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:45.563789   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:46.063279   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:46.564010   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:47.063573   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:47.563386   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:48.064282   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:48.563854   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:49.063459   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:49.564059   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:50.064286   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:50.564237   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:51.063435   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:51.563256   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:52.063661   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:52.563554   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:53.063681   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:53.563368   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:54.063863   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:54.563426   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:55.063793   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:55.564268   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:56.063717   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:56.563689   73188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:03:56.664824   73188 kubeadm.go:1107] duration metric: took 12.308506231s to wait for elevateKubeSystemPrivileges
	W0528 22:03:56.664873   73188 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0528 22:03:56.664885   73188 kubeadm.go:393] duration metric: took 5m5.529497247s to StartCluster
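The burst of `kubectl get sa default` runs above is a wait loop: roughly every 500ms minikube checks whether the "default" ServiceAccount has been created yet, a sign that the controller-manager's service-account controller is up before the elevated kube-system privileges are relied on. A simplified client-go version of that wait; the on-node kubeconfig path is taken from the log, and the loop itself is illustrative.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 500ms until the "default" ServiceAccount exists in the
	// "default" namespace, mirroring the kubectl loop in the log.
	for {
		_, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
		if err == nil {
			fmt.Println("default service account exists")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}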
	I0528 22:03:56.664908   73188 settings.go:142] acquiring lock: {Name:mkc1182212d780ab38539839372c25eb989b2e70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 22:03:56.664987   73188 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 22:03:56.667020   73188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/kubeconfig: {Name:mkb9ab62df00efc21274382bbe7156f03baabac9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 22:03:56.667272   73188 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.48 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0528 22:03:56.669019   73188 out.go:177] * Verifying Kubernetes components...
	I0528 22:03:56.667382   73188 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0528 22:03:56.667455   73188 config.go:182] Loaded profile config "default-k8s-diff-port-249165": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 22:03:56.672619   73188 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-249165"
	I0528 22:03:56.672634   73188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 22:03:56.672634   73188 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-249165"
	I0528 22:03:56.672659   73188 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-249165"
	I0528 22:03:56.672665   73188 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-249165"
	W0528 22:03:56.672671   73188 addons.go:243] addon storage-provisioner should already be in state true
	W0528 22:03:56.672673   73188 addons.go:243] addon metrics-server should already be in state true
	I0528 22:03:56.672696   73188 host.go:66] Checking if "default-k8s-diff-port-249165" exists ...
	I0528 22:03:56.672699   73188 host.go:66] Checking if "default-k8s-diff-port-249165" exists ...
	I0528 22:03:56.672625   73188 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-249165"
	I0528 22:03:56.672741   73188 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-249165"
	I0528 22:03:56.672973   73188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 22:03:56.672993   73188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 22:03:56.673010   73188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 22:03:56.673026   73188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 22:03:56.673163   73188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 22:03:56.673194   73188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 22:03:56.689257   73188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38053
	I0528 22:03:56.689499   73188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41989
	I0528 22:03:56.689836   73188 main.go:141] libmachine: () Calling .GetVersion
	I0528 22:03:56.689955   73188 main.go:141] libmachine: () Calling .GetVersion
	I0528 22:03:56.690383   73188 main.go:141] libmachine: Using API Version  1
	I0528 22:03:56.690403   73188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 22:03:56.690538   73188 main.go:141] libmachine: Using API Version  1
	I0528 22:03:56.690555   73188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 22:03:56.690738   73188 main.go:141] libmachine: () Calling .GetMachineName
	I0528 22:03:56.690899   73188 main.go:141] libmachine: () Calling .GetMachineName
	I0528 22:03:56.691287   73188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 22:03:56.691323   73188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 22:03:56.691754   73188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 22:03:56.691785   73188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 22:03:56.692291   73188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32897
	I0528 22:03:56.692685   73188 main.go:141] libmachine: () Calling .GetVersion
	I0528 22:03:56.693220   73188 main.go:141] libmachine: Using API Version  1
	I0528 22:03:56.693245   73188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 22:03:56.693626   73188 main.go:141] libmachine: () Calling .GetMachineName
	I0528 22:03:56.693856   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetState
	I0528 22:03:56.697987   73188 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-249165"
	W0528 22:03:56.698008   73188 addons.go:243] addon default-storageclass should already be in state true
	I0528 22:03:56.698037   73188 host.go:66] Checking if "default-k8s-diff-port-249165" exists ...
	I0528 22:03:56.698396   73188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 22:03:56.698440   73188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 22:03:56.707841   73188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34727
	I0528 22:03:56.708297   73188 main.go:141] libmachine: () Calling .GetVersion
	I0528 22:03:56.710004   73188 main.go:141] libmachine: Using API Version  1
	I0528 22:03:56.710031   73188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 22:03:56.710055   73188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41499
	I0528 22:03:56.710537   73188 main.go:141] libmachine: () Calling .GetMachineName
	I0528 22:03:56.710741   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetState
	I0528 22:03:56.710818   73188 main.go:141] libmachine: () Calling .GetVersion
	I0528 22:03:56.711308   73188 main.go:141] libmachine: Using API Version  1
	I0528 22:03:56.711333   73188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 22:03:56.711655   73188 main.go:141] libmachine: () Calling .GetMachineName
	I0528 22:03:56.711830   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetState
	I0528 22:03:56.713789   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 22:03:56.716114   73188 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0528 22:03:56.714205   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 22:03:56.717642   73188 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0528 22:03:56.717661   73188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0528 22:03:56.717682   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 22:03:56.719665   73188 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0528 22:03:56.720996   73188 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0528 22:03:56.721011   73188 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0528 22:03:56.721026   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 22:03:56.720668   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 22:03:56.721097   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 22:03:56.721113   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 22:03:56.721212   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHPort
	I0528 22:03:56.721387   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 22:03:56.721521   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHUsername
	I0528 22:03:56.721654   73188 sshutil.go:53] new ssh client: &{IP:192.168.72.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/default-k8s-diff-port-249165/id_rsa Username:docker}
	I0528 22:03:56.724508   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 22:03:56.724964   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 22:03:56.725036   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 22:03:56.725075   73188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38031
	I0528 22:03:56.725301   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHPort
	I0528 22:03:56.725445   73188 main.go:141] libmachine: () Calling .GetVersion
	I0528 22:03:56.725458   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 22:03:56.725595   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHUsername
	I0528 22:03:56.725728   73188 sshutil.go:53] new ssh client: &{IP:192.168.72.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/default-k8s-diff-port-249165/id_rsa Username:docker}
	I0528 22:03:56.725960   73188 main.go:141] libmachine: Using API Version  1
	I0528 22:03:56.725976   73188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 22:03:56.726329   73188 main.go:141] libmachine: () Calling .GetMachineName
	I0528 22:03:56.726874   73188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 22:03:56.726907   73188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 22:03:56.742977   73188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35273
	I0528 22:03:56.743565   73188 main.go:141] libmachine: () Calling .GetVersion
	I0528 22:03:56.744141   73188 main.go:141] libmachine: Using API Version  1
	I0528 22:03:56.744156   73188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 22:03:56.744585   73188 main.go:141] libmachine: () Calling .GetMachineName
	I0528 22:03:56.744742   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetState
	I0528 22:03:56.746660   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .DriverName
	I0528 22:03:56.746937   73188 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0528 22:03:56.746953   73188 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0528 22:03:56.746975   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHHostname
	I0528 22:03:56.749996   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 22:03:56.750477   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:fc:a4", ip: ""} in network mk-default-k8s-diff-port-249165: {Iface:virbr4 ExpiryTime:2024-05-28 22:49:33 +0000 UTC Type:0 Mac:52:54:00:f4:fc:a4 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:default-k8s-diff-port-249165 Clientid:01:52:54:00:f4:fc:a4}
	I0528 22:03:56.750505   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | domain default-k8s-diff-port-249165 has defined IP address 192.168.72.48 and MAC address 52:54:00:f4:fc:a4 in network mk-default-k8s-diff-port-249165
	I0528 22:03:56.750680   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHPort
	I0528 22:03:56.750834   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHKeyPath
	I0528 22:03:56.750977   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .GetSSHUsername
	I0528 22:03:56.751108   73188 sshutil.go:53] new ssh client: &{IP:192.168.72.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/default-k8s-diff-port-249165/id_rsa Username:docker}
	I0528 22:03:56.917578   73188 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 22:03:56.948739   73188 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-249165" to be "Ready" ...
	I0528 22:03:56.960279   73188 node_ready.go:49] node "default-k8s-diff-port-249165" has status "Ready":"True"
	I0528 22:03:56.960331   73188 node_ready.go:38] duration metric: took 11.549106ms for node "default-k8s-diff-port-249165" to be "Ready" ...
	I0528 22:03:56.960343   73188 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 22:03:56.967728   73188 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-249165" in "kube-system" namespace to be "Ready" ...
	I0528 22:03:56.973605   73188 pod_ready.go:92] pod "etcd-default-k8s-diff-port-249165" in "kube-system" namespace has status "Ready":"True"
	I0528 22:03:56.973626   73188 pod_ready.go:81] duration metric: took 5.846822ms for pod "etcd-default-k8s-diff-port-249165" in "kube-system" namespace to be "Ready" ...
	I0528 22:03:56.973637   73188 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-249165" in "kube-system" namespace to be "Ready" ...
	I0528 22:03:56.978965   73188 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-249165" in "kube-system" namespace has status "Ready":"True"
	I0528 22:03:56.978991   73188 pod_ready.go:81] duration metric: took 5.346348ms for pod "kube-apiserver-default-k8s-diff-port-249165" in "kube-system" namespace to be "Ready" ...
	I0528 22:03:56.979003   73188 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-249165" in "kube-system" namespace to be "Ready" ...
	I0528 22:03:56.992525   73188 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-249165" in "kube-system" namespace has status "Ready":"True"
	I0528 22:03:56.992553   73188 pod_ready.go:81] duration metric: took 13.54102ms for pod "kube-controller-manager-default-k8s-diff-port-249165" in "kube-system" namespace to be "Ready" ...
	I0528 22:03:56.992565   73188 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-249165" in "kube-system" namespace to be "Ready" ...
	I0528 22:03:56.999982   73188 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-249165" in "kube-system" namespace has status "Ready":"True"
	I0528 22:03:57.000004   73188 pod_ready.go:81] duration metric: took 7.430535ms for pod "kube-scheduler-default-k8s-diff-port-249165" in "kube-system" namespace to be "Ready" ...
	I0528 22:03:57.000012   73188 pod_ready.go:38] duration metric: took 39.659784ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 22:03:57.000025   73188 api_server.go:52] waiting for apiserver process to appear ...
	I0528 22:03:57.000081   73188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 22:03:57.005838   73188 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0528 22:03:57.005866   73188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0528 22:03:57.024072   73188 api_server.go:72] duration metric: took 356.761134ms to wait for apiserver process to appear ...
	I0528 22:03:57.024093   73188 api_server.go:88] waiting for apiserver healthz status ...
	I0528 22:03:57.024110   73188 api_server.go:253] Checking apiserver healthz at https://192.168.72.48:8444/healthz ...
	I0528 22:03:57.032258   73188 api_server.go:279] https://192.168.72.48:8444/healthz returned 200:
	ok
	I0528 22:03:57.033413   73188 api_server.go:141] control plane version: v1.30.1
	I0528 22:03:57.033434   73188 api_server.go:131] duration metric: took 9.333959ms to wait for apiserver health ...
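The health check above is a plain HTTPS GET against the API server's /healthz endpoint on the non-default port 8444, expecting a 200 response with body "ok". A minimal sketch of the same probe, with TLS verification skipped purely for brevity; a real check would trust the cluster CA instead.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustration only: skip certificate verification rather than
			// loading the cluster CA and client credentials.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	const url = "https://192.168.72.48:8444/healthz"
	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body)
}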
	I0528 22:03:57.033444   73188 system_pods.go:43] waiting for kube-system pods to appear ...
	I0528 22:03:57.046727   73188 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0528 22:03:57.046750   73188 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0528 22:03:57.105303   73188 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0528 22:03:57.105327   73188 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0528 22:03:57.123417   73188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0528 22:03:57.158565   73188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0528 22:03:57.178241   73188 system_pods.go:59] 5 kube-system pods found
	I0528 22:03:57.178282   73188 system_pods.go:61] "etcd-default-k8s-diff-port-249165" [8554d79a-3cad-4bc3-96bb-37f0084b46ce] Running
	I0528 22:03:57.178289   73188 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-249165" [289ffd6a-4f42-450a-8bcc-9769a7f233bb] Running
	I0528 22:03:57.178295   73188 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-249165" [57235e01-ba9a-45f8-bdbd-032c5a258797] Running
	I0528 22:03:57.178304   73188 system_pods.go:61] "kube-proxy-b2nd9" [df64df09-8898-44db-919c-0b1d564538ee] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0528 22:03:57.178363   73188 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-249165" [1804acb1-4682-422a-9a09-7b79589f3ae5] Running
	I0528 22:03:57.178378   73188 system_pods.go:74] duration metric: took 144.927386ms to wait for pod list to return data ...
	I0528 22:03:57.178389   73188 default_sa.go:34] waiting for default service account to be created ...
	I0528 22:03:57.202680   73188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0528 22:03:57.370886   73188 default_sa.go:45] found service account: "default"
	I0528 22:03:57.370917   73188 default_sa.go:55] duration metric: took 192.512428ms for default service account to be created ...
	I0528 22:03:57.370928   73188 system_pods.go:116] waiting for k8s-apps to be running ...
	I0528 22:03:57.627455   73188 system_pods.go:86] 7 kube-system pods found
	I0528 22:03:57.627489   73188 system_pods.go:89] "coredns-7db6d8ff4d-9v4qf" [970de16b-4ade-4d82-8f78-fc83fc86fc8a] Pending
	I0528 22:03:57.627497   73188 system_pods.go:89] "coredns-7db6d8ff4d-m7n7k" [caf303ad-139a-4b42-820e-617fa654399c] Pending
	I0528 22:03:57.627504   73188 system_pods.go:89] "etcd-default-k8s-diff-port-249165" [8554d79a-3cad-4bc3-96bb-37f0084b46ce] Running
	I0528 22:03:57.627511   73188 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-249165" [289ffd6a-4f42-450a-8bcc-9769a7f233bb] Running
	I0528 22:03:57.627518   73188 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-249165" [57235e01-ba9a-45f8-bdbd-032c5a258797] Running
	I0528 22:03:57.627528   73188 system_pods.go:89] "kube-proxy-b2nd9" [df64df09-8898-44db-919c-0b1d564538ee] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0528 22:03:57.627535   73188 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-249165" [1804acb1-4682-422a-9a09-7b79589f3ae5] Running
	I0528 22:03:57.627559   73188 retry.go:31] will retry after 254.633885ms: missing components: kube-dns, kube-proxy
	I0528 22:03:57.888116   73188 system_pods.go:86] 7 kube-system pods found
	I0528 22:03:57.888151   73188 system_pods.go:89] "coredns-7db6d8ff4d-9v4qf" [970de16b-4ade-4d82-8f78-fc83fc86fc8a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0528 22:03:57.888163   73188 system_pods.go:89] "coredns-7db6d8ff4d-m7n7k" [caf303ad-139a-4b42-820e-617fa654399c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0528 22:03:57.888170   73188 system_pods.go:89] "etcd-default-k8s-diff-port-249165" [8554d79a-3cad-4bc3-96bb-37f0084b46ce] Running
	I0528 22:03:57.888178   73188 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-249165" [289ffd6a-4f42-450a-8bcc-9769a7f233bb] Running
	I0528 22:03:57.888184   73188 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-249165" [57235e01-ba9a-45f8-bdbd-032c5a258797] Running
	I0528 22:03:57.888194   73188 system_pods.go:89] "kube-proxy-b2nd9" [df64df09-8898-44db-919c-0b1d564538ee] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0528 22:03:57.888201   73188 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-249165" [1804acb1-4682-422a-9a09-7b79589f3ae5] Running
	I0528 22:03:57.888223   73188 retry.go:31] will retry after 268.738305ms: missing components: kube-dns, kube-proxy
	I0528 22:03:58.043325   73188 main.go:141] libmachine: Making call to close driver server
	I0528 22:03:58.043356   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .Close
	I0528 22:03:58.043650   73188 main.go:141] libmachine: Successfully made call to close driver server
	I0528 22:03:58.043674   73188 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 22:03:58.043693   73188 main.go:141] libmachine: Making call to close driver server
	I0528 22:03:58.043707   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .Close
	I0528 22:03:58.043949   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | Closing plugin on server side
	I0528 22:03:58.044008   73188 main.go:141] libmachine: Successfully made call to close driver server
	I0528 22:03:58.044028   73188 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 22:03:58.049206   73188 main.go:141] libmachine: Making call to close driver server
	I0528 22:03:58.049225   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .Close
	I0528 22:03:58.049473   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | Closing plugin on server side
	I0528 22:03:58.049518   73188 main.go:141] libmachine: Successfully made call to close driver server
	I0528 22:03:58.049528   73188 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 22:03:58.049540   73188 main.go:141] libmachine: Making call to close driver server
	I0528 22:03:58.049550   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .Close
	I0528 22:03:58.049785   73188 main.go:141] libmachine: Successfully made call to close driver server
	I0528 22:03:58.049801   73188 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 22:03:58.065546   73188 main.go:141] libmachine: Making call to close driver server
	I0528 22:03:58.065567   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .Close
	I0528 22:03:58.065857   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | Closing plugin on server side
	I0528 22:03:58.065884   73188 main.go:141] libmachine: Successfully made call to close driver server
	I0528 22:03:58.065898   73188 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 22:03:58.169017   73188 system_pods.go:86] 8 kube-system pods found
	I0528 22:03:58.169047   73188 system_pods.go:89] "coredns-7db6d8ff4d-9v4qf" [970de16b-4ade-4d82-8f78-fc83fc86fc8a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0528 22:03:58.169054   73188 system_pods.go:89] "coredns-7db6d8ff4d-m7n7k" [caf303ad-139a-4b42-820e-617fa654399c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0528 22:03:58.169062   73188 system_pods.go:89] "etcd-default-k8s-diff-port-249165" [8554d79a-3cad-4bc3-96bb-37f0084b46ce] Running
	I0528 22:03:58.169070   73188 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-249165" [289ffd6a-4f42-450a-8bcc-9769a7f233bb] Running
	I0528 22:03:58.169077   73188 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-249165" [57235e01-ba9a-45f8-bdbd-032c5a258797] Running
	I0528 22:03:58.169085   73188 system_pods.go:89] "kube-proxy-b2nd9" [df64df09-8898-44db-919c-0b1d564538ee] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0528 22:03:58.169091   73188 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-249165" [1804acb1-4682-422a-9a09-7b79589f3ae5] Running
	I0528 22:03:58.169101   73188 system_pods.go:89] "storage-provisioner" [be3fd3ac-6795-4168-bd94-007932dcbb2c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0528 22:03:58.169119   73188 retry.go:31] will retry after 296.463415ms: missing components: kube-dns, kube-proxy
	I0528 22:03:58.348570   73188 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.145845195s)
	I0528 22:03:58.348628   73188 main.go:141] libmachine: Making call to close driver server
	I0528 22:03:58.348646   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .Close
	I0528 22:03:58.348982   73188 main.go:141] libmachine: Successfully made call to close driver server
	I0528 22:03:58.348993   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) DBG | Closing plugin on server side
	I0528 22:03:58.349011   73188 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 22:03:58.349022   73188 main.go:141] libmachine: Making call to close driver server
	I0528 22:03:58.349030   73188 main.go:141] libmachine: (default-k8s-diff-port-249165) Calling .Close
	I0528 22:03:58.349262   73188 main.go:141] libmachine: Successfully made call to close driver server
	I0528 22:03:58.349277   73188 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 22:03:58.349288   73188 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-249165"
	I0528 22:03:58.351022   73188 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0528 22:03:58.352295   73188 addons.go:510] duration metric: took 1.684913905s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
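The addon enablement above boils down to copying each manifest to /etc/kubernetes/addons/ on the node and applying the set with the pinned kubectl binary, as the Run lines show. A condensed sketch of that apply step, assuming the manifests are already in place:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	args := []string{"--kubeconfig=/var/lib/minikube/kubeconfig", "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	out, err := exec.Command("/var/lib/minikube/binaries/v1.30.1/kubectl", args...).CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}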
	I0528 22:03:58.475437   73188 system_pods.go:86] 9 kube-system pods found
	I0528 22:03:58.475469   73188 system_pods.go:89] "coredns-7db6d8ff4d-9v4qf" [970de16b-4ade-4d82-8f78-fc83fc86fc8a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0528 22:03:58.475477   73188 system_pods.go:89] "coredns-7db6d8ff4d-m7n7k" [caf303ad-139a-4b42-820e-617fa654399c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0528 22:03:58.475485   73188 system_pods.go:89] "etcd-default-k8s-diff-port-249165" [8554d79a-3cad-4bc3-96bb-37f0084b46ce] Running
	I0528 22:03:58.475491   73188 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-249165" [289ffd6a-4f42-450a-8bcc-9769a7f233bb] Running
	I0528 22:03:58.475495   73188 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-249165" [57235e01-ba9a-45f8-bdbd-032c5a258797] Running
	I0528 22:03:58.475500   73188 system_pods.go:89] "kube-proxy-b2nd9" [df64df09-8898-44db-919c-0b1d564538ee] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0528 22:03:58.475505   73188 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-249165" [1804acb1-4682-422a-9a09-7b79589f3ae5] Running
	I0528 22:03:58.475511   73188 system_pods.go:89] "metrics-server-569cc877fc-6q6pz" [443b12f9-e99d-4bb7-ae3f-8a25ed277f44] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0528 22:03:58.475523   73188 system_pods.go:89] "storage-provisioner" [be3fd3ac-6795-4168-bd94-007932dcbb2c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0528 22:03:58.475539   73188 retry.go:31] will retry after 570.589575ms: missing components: kube-dns, kube-proxy
	I0528 22:03:59.056553   73188 system_pods.go:86] 9 kube-system pods found
	I0528 22:03:59.056585   73188 system_pods.go:89] "coredns-7db6d8ff4d-9v4qf" [970de16b-4ade-4d82-8f78-fc83fc86fc8a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0528 22:03:59.056608   73188 system_pods.go:89] "coredns-7db6d8ff4d-m7n7k" [caf303ad-139a-4b42-820e-617fa654399c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0528 22:03:59.056615   73188 system_pods.go:89] "etcd-default-k8s-diff-port-249165" [8554d79a-3cad-4bc3-96bb-37f0084b46ce] Running
	I0528 22:03:59.056621   73188 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-249165" [289ffd6a-4f42-450a-8bcc-9769a7f233bb] Running
	I0528 22:03:59.056625   73188 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-249165" [57235e01-ba9a-45f8-bdbd-032c5a258797] Running
	I0528 22:03:59.056630   73188 system_pods.go:89] "kube-proxy-b2nd9" [df64df09-8898-44db-919c-0b1d564538ee] Running
	I0528 22:03:59.056635   73188 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-249165" [1804acb1-4682-422a-9a09-7b79589f3ae5] Running
	I0528 22:03:59.056641   73188 system_pods.go:89] "metrics-server-569cc877fc-6q6pz" [443b12f9-e99d-4bb7-ae3f-8a25ed277f44] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0528 22:03:59.056648   73188 system_pods.go:89] "storage-provisioner" [be3fd3ac-6795-4168-bd94-007932dcbb2c] Running
	I0528 22:03:59.056662   73188 retry.go:31] will retry after 524.559216ms: missing components: kube-dns
	I0528 22:03:59.587811   73188 system_pods.go:86] 9 kube-system pods found
	I0528 22:03:59.587841   73188 system_pods.go:89] "coredns-7db6d8ff4d-9v4qf" [970de16b-4ade-4d82-8f78-fc83fc86fc8a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0528 22:03:59.587849   73188 system_pods.go:89] "coredns-7db6d8ff4d-m7n7k" [caf303ad-139a-4b42-820e-617fa654399c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0528 22:03:59.587856   73188 system_pods.go:89] "etcd-default-k8s-diff-port-249165" [8554d79a-3cad-4bc3-96bb-37f0084b46ce] Running
	I0528 22:03:59.587862   73188 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-249165" [289ffd6a-4f42-450a-8bcc-9769a7f233bb] Running
	I0528 22:03:59.587866   73188 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-249165" [57235e01-ba9a-45f8-bdbd-032c5a258797] Running
	I0528 22:03:59.587870   73188 system_pods.go:89] "kube-proxy-b2nd9" [df64df09-8898-44db-919c-0b1d564538ee] Running
	I0528 22:03:59.587874   73188 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-249165" [1804acb1-4682-422a-9a09-7b79589f3ae5] Running
	I0528 22:03:59.587880   73188 system_pods.go:89] "metrics-server-569cc877fc-6q6pz" [443b12f9-e99d-4bb7-ae3f-8a25ed277f44] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0528 22:03:59.587884   73188 system_pods.go:89] "storage-provisioner" [be3fd3ac-6795-4168-bd94-007932dcbb2c] Running
	I0528 22:03:59.587897   73188 retry.go:31] will retry after 629.323845ms: missing components: kube-dns
	I0528 22:04:00.227627   73188 system_pods.go:86] 9 kube-system pods found
	I0528 22:04:00.227659   73188 system_pods.go:89] "coredns-7db6d8ff4d-9v4qf" [970de16b-4ade-4d82-8f78-fc83fc86fc8a] Running
	I0528 22:04:00.227664   73188 system_pods.go:89] "coredns-7db6d8ff4d-m7n7k" [caf303ad-139a-4b42-820e-617fa654399c] Running
	I0528 22:04:00.227669   73188 system_pods.go:89] "etcd-default-k8s-diff-port-249165" [8554d79a-3cad-4bc3-96bb-37f0084b46ce] Running
	I0528 22:04:00.227674   73188 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-249165" [289ffd6a-4f42-450a-8bcc-9769a7f233bb] Running
	I0528 22:04:00.227679   73188 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-249165" [57235e01-ba9a-45f8-bdbd-032c5a258797] Running
	I0528 22:04:00.227683   73188 system_pods.go:89] "kube-proxy-b2nd9" [df64df09-8898-44db-919c-0b1d564538ee] Running
	I0528 22:04:00.227687   73188 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-249165" [1804acb1-4682-422a-9a09-7b79589f3ae5] Running
	I0528 22:04:00.227694   73188 system_pods.go:89] "metrics-server-569cc877fc-6q6pz" [443b12f9-e99d-4bb7-ae3f-8a25ed277f44] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0528 22:04:00.227698   73188 system_pods.go:89] "storage-provisioner" [be3fd3ac-6795-4168-bd94-007932dcbb2c] Running
	I0528 22:04:00.227709   73188 system_pods.go:126] duration metric: took 2.856773755s to wait for k8s-apps to be running ...
	I0528 22:04:00.227719   73188 system_svc.go:44] waiting for kubelet service to be running ....
	I0528 22:04:00.227759   73188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 22:04:00.245865   73188 system_svc.go:56] duration metric: took 18.136353ms WaitForService to wait for kubelet
	I0528 22:04:00.245901   73188 kubeadm.go:576] duration metric: took 3.578592994s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 22:04:00.245927   73188 node_conditions.go:102] verifying NodePressure condition ...
	I0528 22:04:00.248867   73188 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 22:04:00.248891   73188 node_conditions.go:123] node cpu capacity is 2
	I0528 22:04:00.248906   73188 node_conditions.go:105] duration metric: took 2.971728ms to run NodePressure ...
	I0528 22:04:00.248923   73188 start.go:240] waiting for startup goroutines ...
	I0528 22:04:00.248934   73188 start.go:245] waiting for cluster config update ...
	I0528 22:04:00.248951   73188 start.go:254] writing updated cluster config ...
	I0528 22:04:00.249278   73188 ssh_runner.go:195] Run: rm -f paused
	I0528 22:04:00.297365   73188 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0528 22:04:00.299141   73188 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-249165" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	May 28 22:08:55 old-k8s-version-499466 crio[643]: time="2024-05-28 22:08:55.505544726Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716934135505514388,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2b5a69fb-15cd-4778-bac1-8b35d75a8fbd name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:08:55 old-k8s-version-499466 crio[643]: time="2024-05-28 22:08:55.506284268Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5e6a3a22-0ad8-471f-85d6-d7f829df4a19 name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:08:55 old-k8s-version-499466 crio[643]: time="2024-05-28 22:08:55.506366341Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5e6a3a22-0ad8-471f-85d6-d7f829df4a19 name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:08:55 old-k8s-version-499466 crio[643]: time="2024-05-28 22:08:55.506402525Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5e6a3a22-0ad8-471f-85d6-d7f829df4a19 name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:08:55 old-k8s-version-499466 crio[643]: time="2024-05-28 22:08:55.548127983Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4ae61e1d-e106-402e-bf3a-05120714788d name=/runtime.v1.RuntimeService/Version
	May 28 22:08:55 old-k8s-version-499466 crio[643]: time="2024-05-28 22:08:55.548262444Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4ae61e1d-e106-402e-bf3a-05120714788d name=/runtime.v1.RuntimeService/Version
	May 28 22:08:55 old-k8s-version-499466 crio[643]: time="2024-05-28 22:08:55.549506042Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ab7f7f4e-2784-4500-be09-1a6c4b67c201 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:08:55 old-k8s-version-499466 crio[643]: time="2024-05-28 22:08:55.549901533Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716934135549879554,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ab7f7f4e-2784-4500-be09-1a6c4b67c201 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:08:55 old-k8s-version-499466 crio[643]: time="2024-05-28 22:08:55.550477725Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2f4abc3c-1e6e-4db7-8e8b-98c59cbbcc34 name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:08:55 old-k8s-version-499466 crio[643]: time="2024-05-28 22:08:55.550530965Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2f4abc3c-1e6e-4db7-8e8b-98c59cbbcc34 name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:08:55 old-k8s-version-499466 crio[643]: time="2024-05-28 22:08:55.550568210Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2f4abc3c-1e6e-4db7-8e8b-98c59cbbcc34 name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:08:55 old-k8s-version-499466 crio[643]: time="2024-05-28 22:08:55.587731819Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=03525c26-45be-4762-bf33-e2002a79342e name=/runtime.v1.RuntimeService/Version
	May 28 22:08:55 old-k8s-version-499466 crio[643]: time="2024-05-28 22:08:55.587837703Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=03525c26-45be-4762-bf33-e2002a79342e name=/runtime.v1.RuntimeService/Version
	May 28 22:08:55 old-k8s-version-499466 crio[643]: time="2024-05-28 22:08:55.589260059Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0951a6a7-1f7f-43e7-835b-e96f02330d70 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:08:55 old-k8s-version-499466 crio[643]: time="2024-05-28 22:08:55.589742433Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716934135589716296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0951a6a7-1f7f-43e7-835b-e96f02330d70 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:08:55 old-k8s-version-499466 crio[643]: time="2024-05-28 22:08:55.590295883Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ae34eb8f-6c49-49af-83e9-b4311a3ec0fb name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:08:55 old-k8s-version-499466 crio[643]: time="2024-05-28 22:08:55.590367870Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ae34eb8f-6c49-49af-83e9-b4311a3ec0fb name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:08:55 old-k8s-version-499466 crio[643]: time="2024-05-28 22:08:55.590409140Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ae34eb8f-6c49-49af-83e9-b4311a3ec0fb name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:08:55 old-k8s-version-499466 crio[643]: time="2024-05-28 22:08:55.624819584Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bb639586-483c-4172-9ec5-e87f0d8c02ae name=/runtime.v1.RuntimeService/Version
	May 28 22:08:55 old-k8s-version-499466 crio[643]: time="2024-05-28 22:08:55.624910985Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bb639586-483c-4172-9ec5-e87f0d8c02ae name=/runtime.v1.RuntimeService/Version
	May 28 22:08:55 old-k8s-version-499466 crio[643]: time="2024-05-28 22:08:55.626292595Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ce4a63de-b94d-4dc2-84b8-8339e217b93c name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:08:55 old-k8s-version-499466 crio[643]: time="2024-05-28 22:08:55.626679441Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716934135626655108,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ce4a63de-b94d-4dc2-84b8-8339e217b93c name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:08:55 old-k8s-version-499466 crio[643]: time="2024-05-28 22:08:55.627366089Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bb1930c0-9e8d-4d83-a72d-aa7a39a2426e name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:08:55 old-k8s-version-499466 crio[643]: time="2024-05-28 22:08:55.627440190Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bb1930c0-9e8d-4d83-a72d-aa7a39a2426e name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:08:55 old-k8s-version-499466 crio[643]: time="2024-05-28 22:08:55.627473079Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=bb1930c0-9e8d-4d83-a72d-aa7a39a2426e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[May28 21:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.059723] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041122] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.612680] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.319990] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.591576] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.302597] systemd-fstab-generator[565]: Ignoring "noauto" option for root device
	[  +0.059124] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058807] systemd-fstab-generator[577]: Ignoring "noauto" option for root device
	[  +0.173273] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.170028] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.245355] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +6.602395] systemd-fstab-generator[831]: Ignoring "noauto" option for root device
	[  +0.061119] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.883111] systemd-fstab-generator[957]: Ignoring "noauto" option for root device
	[ +13.815764] kauditd_printk_skb: 46 callbacks suppressed
	[May28 21:53] systemd-fstab-generator[5029]: Ignoring "noauto" option for root device
	[May28 21:55] systemd-fstab-generator[5306]: Ignoring "noauto" option for root device
	[  +0.062272] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 22:08:55 up 19 min,  0 users,  load average: 0.02, 0.03, 0.04
	Linux old-k8s-version-499466 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	May 28 22:08:54 old-k8s-version-499466 kubelet[6817]:         /usr/local/go/src/net/lookup.go:299 +0x685
	May 28 22:08:54 old-k8s-version-499466 kubelet[6817]: net.(*Resolver).internetAddrList(0x70c5740, 0x4f7fe40, 0xc000093080, 0x48ab5d6, 0x3, 0xc000bb3710, 0x24, 0x0, 0x0, 0x0, ...)
	May 28 22:08:54 old-k8s-version-499466 kubelet[6817]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	May 28 22:08:54 old-k8s-version-499466 kubelet[6817]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc000093080, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc000bb3710, 0x24, 0x0, ...)
	May 28 22:08:54 old-k8s-version-499466 kubelet[6817]:         /usr/local/go/src/net/dial.go:221 +0x47d
	May 28 22:08:54 old-k8s-version-499466 kubelet[6817]: net.(*Dialer).DialContext(0xc000ba2120, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000bb3710, 0x24, 0x0, 0x0, 0x0, ...)
	May 28 22:08:54 old-k8s-version-499466 kubelet[6817]:         /usr/local/go/src/net/dial.go:403 +0x22b
	May 28 22:08:54 old-k8s-version-499466 kubelet[6817]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000ba4980, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000bb3710, 0x24, 0x60, 0x7f6308224490, 0x118, ...)
	May 28 22:08:54 old-k8s-version-499466 kubelet[6817]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	May 28 22:08:54 old-k8s-version-499466 kubelet[6817]: net/http.(*Transport).dial(0xc00015cb40, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000bb3710, 0x24, 0x0, 0x0, 0x0, ...)
	May 28 22:08:54 old-k8s-version-499466 kubelet[6817]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	May 28 22:08:54 old-k8s-version-499466 kubelet[6817]: net/http.(*Transport).dialConn(0xc00015cb40, 0x4f7fe00, 0xc000120018, 0x0, 0xc0003b8480, 0x5, 0xc000bb3710, 0x24, 0x0, 0xc000a707e0, ...)
	May 28 22:08:54 old-k8s-version-499466 kubelet[6817]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	May 28 22:08:54 old-k8s-version-499466 kubelet[6817]: net/http.(*Transport).dialConnFor(0xc00015cb40, 0xc000af3810)
	May 28 22:08:54 old-k8s-version-499466 kubelet[6817]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	May 28 22:08:54 old-k8s-version-499466 kubelet[6817]: created by net/http.(*Transport).queueForDial
	May 28 22:08:54 old-k8s-version-499466 kubelet[6817]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	May 28 22:08:55 old-k8s-version-499466 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 140.
	May 28 22:08:55 old-k8s-version-499466 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	May 28 22:08:55 old-k8s-version-499466 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	May 28 22:08:55 old-k8s-version-499466 kubelet[6863]: I0528 22:08:55.550066    6863 server.go:416] Version: v1.20.0
	May 28 22:08:55 old-k8s-version-499466 kubelet[6863]: I0528 22:08:55.550425    6863 server.go:837] Client rotation is on, will bootstrap in background
	May 28 22:08:55 old-k8s-version-499466 kubelet[6863]: I0528 22:08:55.553913    6863 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	May 28 22:08:55 old-k8s-version-499466 kubelet[6863]: I0528 22:08:55.555797    6863 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	May 28 22:08:55 old-k8s-version-499466 kubelet[6863]: W0528 22:08:55.556291    6863 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-499466 -n old-k8s-version-499466
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-499466 -n old-k8s-version-499466: exit status 2 (227.609467ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-499466" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (147.73s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (238.6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0528 22:13:07.823830   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/old-k8s-version-499466/client.crt: no such file or directory
E0528 22:13:20.453030   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/flannel-110727/client.crt: no such file or directory
E0528 22:13:48.784681   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/old-k8s-version-499466/client.crt: no such file or directory
E0528 22:14:04.181887   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/bridge-110727/client.crt: no such file or directory
E0528 22:14:06.435396   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/no-preload-290122/client.crt: no such file or directory
E0528 22:14:25.051529   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/enable-default-cni-110727/client.crt: no such file or directory
E0528 22:14:25.648214   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.crt: no such file or directory
E0528 22:14:32.640976   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kindnet-110727/client.crt: no such file or directory
E0528 22:14:39.175900   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/calico-110727/client.crt: no such file or directory
E0528 22:14:42.598215   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.crt: no such file or directory
E0528 22:14:45.763396   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/auto-110727/client.crt: no such file or directory
E0528 22:14:58.382826   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/custom-flannel-110727/client.crt: no such file or directory
E0528 22:15:10.705573   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/old-k8s-version-499466/client.crt: no such file or directory
E0528 22:16:22.593471   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/no-preload-290122/client.crt: no such file or directory
E0528 22:16:23.499529   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/flannel-110727/client.crt: no such file or directory
E0528 22:16:36.131070   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/calico-110727/client.crt: no such file or directory
E0528 22:16:50.276118   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/no-preload-290122/client.crt: no such file or directory
E0528 22:16:55.337282   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/custom-flannel-110727/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-249165 -n default-k8s-diff-port-249165
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-05-28 22:16:59.36716603 +0000 UTC m=+6950.123244642
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-249165 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-249165 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.217µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-249165 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-249165 -n default-k8s-diff-port-249165
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-249165 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-249165 logs -n 25: (1.22180008s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-290122                                   | no-preload-290122            | jenkins | v1.33.1 | 28 May 24 21:44 UTC | 28 May 24 21:54 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-595279                                  | embed-certs-595279           | jenkins | v1.33.1 | 28 May 24 21:44 UTC | 28 May 24 21:53 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-499466                              | old-k8s-version-499466       | jenkins | v1.33.1 | 28 May 24 21:45 UTC | 28 May 24 21:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-499466             | old-k8s-version-499466       | jenkins | v1.33.1 | 28 May 24 21:45 UTC | 28 May 24 21:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-499466                              | old-k8s-version-499466       | jenkins | v1.33.1 | 28 May 24 21:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-257793                              | cert-expiration-257793       | jenkins | v1.33.1 | 28 May 24 21:46 UTC | 28 May 24 21:46 UTC |
	| delete  | -p                                                     | disable-driver-mounts-807140 | jenkins | v1.33.1 | 28 May 24 21:46 UTC | 28 May 24 21:46 UTC |
	|         | disable-driver-mounts-807140                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-249165 | jenkins | v1.33.1 | 28 May 24 21:46 UTC | 28 May 24 21:50 UTC |
	|         | default-k8s-diff-port-249165                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-249165  | default-k8s-diff-port-249165 | jenkins | v1.33.1 | 28 May 24 21:51 UTC | 28 May 24 21:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-249165 | jenkins | v1.33.1 | 28 May 24 21:51 UTC |                     |
	|         | default-k8s-diff-port-249165                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-249165       | default-k8s-diff-port-249165 | jenkins | v1.33.1 | 28 May 24 21:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-249165 | jenkins | v1.33.1 | 28 May 24 21:53 UTC | 28 May 24 22:04 UTC |
	|         | default-k8s-diff-port-249165                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-499466                              | old-k8s-version-499466       | jenkins | v1.33.1 | 28 May 24 22:08 UTC | 28 May 24 22:08 UTC |
	| start   | -p newest-cni-588598 --memory=2200 --alsologtostderr   | newest-cni-588598            | jenkins | v1.33.1 | 28 May 24 22:08 UTC | 28 May 24 22:09 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-290122                                   | no-preload-290122            | jenkins | v1.33.1 | 28 May 24 22:09 UTC | 28 May 24 22:09 UTC |
	| addons  | enable metrics-server -p newest-cni-588598             | newest-cni-588598            | jenkins | v1.33.1 | 28 May 24 22:09 UTC | 28 May 24 22:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-588598                                   | newest-cni-588598            | jenkins | v1.33.1 | 28 May 24 22:09 UTC | 28 May 24 22:10 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-595279                                  | embed-certs-595279           | jenkins | v1.33.1 | 28 May 24 22:09 UTC | 28 May 24 22:09 UTC |
	| addons  | enable dashboard -p newest-cni-588598                  | newest-cni-588598            | jenkins | v1.33.1 | 28 May 24 22:10 UTC | 28 May 24 22:10 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-588598 --memory=2200 --alsologtostderr   | newest-cni-588598            | jenkins | v1.33.1 | 28 May 24 22:10 UTC | 28 May 24 22:10 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-588598 image list                           | newest-cni-588598            | jenkins | v1.33.1 | 28 May 24 22:10 UTC | 28 May 24 22:10 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-588598                                   | newest-cni-588598            | jenkins | v1.33.1 | 28 May 24 22:10 UTC | 28 May 24 22:10 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-588598                                   | newest-cni-588598            | jenkins | v1.33.1 | 28 May 24 22:10 UTC | 28 May 24 22:10 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-588598                                   | newest-cni-588598            | jenkins | v1.33.1 | 28 May 24 22:10 UTC | 28 May 24 22:10 UTC |
	| delete  | -p newest-cni-588598                                   | newest-cni-588598            | jenkins | v1.33.1 | 28 May 24 22:10 UTC | 28 May 24 22:10 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/28 22:10:03
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0528 22:10:03.487472   78166 out.go:291] Setting OutFile to fd 1 ...
	I0528 22:10:03.487717   78166 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 22:10:03.487726   78166 out.go:304] Setting ErrFile to fd 2...
	I0528 22:10:03.487730   78166 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 22:10:03.487900   78166 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
	I0528 22:10:03.488383   78166 out.go:298] Setting JSON to false
	I0528 22:10:03.489199   78166 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6746,"bootTime":1716927457,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0528 22:10:03.489254   78166 start.go:139] virtualization: kvm guest
	I0528 22:10:03.491506   78166 out.go:177] * [newest-cni-588598] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0528 22:10:03.492798   78166 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 22:10:03.492799   78166 notify.go:220] Checking for updates...
	I0528 22:10:03.494011   78166 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 22:10:03.495913   78166 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 22:10:03.497297   78166 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 22:10:03.498518   78166 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0528 22:10:03.499871   78166 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 22:10:03.501242   78166 config.go:182] Loaded profile config "newest-cni-588598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 22:10:03.501626   78166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 22:10:03.501690   78166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 22:10:03.516147   78166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37103
	I0528 22:10:03.516483   78166 main.go:141] libmachine: () Calling .GetVersion
	I0528 22:10:03.516961   78166 main.go:141] libmachine: Using API Version  1
	I0528 22:10:03.516982   78166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 22:10:03.517285   78166 main.go:141] libmachine: () Calling .GetMachineName
	I0528 22:10:03.517476   78166 main.go:141] libmachine: (newest-cni-588598) Calling .DriverName
	I0528 22:10:03.517742   78166 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 22:10:03.518083   78166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 22:10:03.518118   78166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 22:10:03.532156   78166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33283
	I0528 22:10:03.532488   78166 main.go:141] libmachine: () Calling .GetVersion
	I0528 22:10:03.532895   78166 main.go:141] libmachine: Using API Version  1
	I0528 22:10:03.532913   78166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 22:10:03.533318   78166 main.go:141] libmachine: () Calling .GetMachineName
	I0528 22:10:03.533545   78166 main.go:141] libmachine: (newest-cni-588598) Calling .DriverName
	I0528 22:10:03.567889   78166 out.go:177] * Using the kvm2 driver based on existing profile
	I0528 22:10:03.569220   78166 start.go:297] selected driver: kvm2
	I0528 22:10:03.569233   78166 start.go:901] validating driver "kvm2" against &{Name:newest-cni-588598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-588598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 22:10:03.569340   78166 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0528 22:10:03.570282   78166 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 22:10:03.570362   78166 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18966-3963/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0528 22:10:03.584694   78166 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0528 22:10:03.585222   78166 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0528 22:10:03.585296   78166 cni.go:84] Creating CNI manager for ""
	I0528 22:10:03.585313   78166 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 22:10:03.585368   78166 start.go:340] cluster config:
	{Name:newest-cni-588598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-588598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 22:10:03.585538   78166 iso.go:125] acquiring lock: {Name:mk0ef48bd19f4019c3ee87391d15b9afc1d5e8d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 22:10:03.587612   78166 out.go:177] * Starting "newest-cni-588598" primary control-plane node in "newest-cni-588598" cluster
	I0528 22:10:03.588794   78166 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 22:10:03.588824   78166 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0528 22:10:03.588834   78166 cache.go:56] Caching tarball of preloaded images
	I0528 22:10:03.588900   78166 preload.go:173] Found /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0528 22:10:03.588910   78166 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0528 22:10:03.589003   78166 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598/config.json ...
	I0528 22:10:03.589179   78166 start.go:360] acquireMachinesLock for newest-cni-588598: {Name:mk6fb3103b370c7f3d9923fd9de7afe2c77239d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0528 22:10:03.589237   78166 start.go:364] duration metric: took 30.18µs to acquireMachinesLock for "newest-cni-588598"
	I0528 22:10:03.589256   78166 start.go:96] Skipping create...Using existing machine configuration
	I0528 22:10:03.589266   78166 fix.go:54] fixHost starting: 
	I0528 22:10:03.589606   78166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 22:10:03.589639   78166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 22:10:03.603301   78166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44899
	I0528 22:10:03.603685   78166 main.go:141] libmachine: () Calling .GetVersion
	I0528 22:10:03.604116   78166 main.go:141] libmachine: Using API Version  1
	I0528 22:10:03.604144   78166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 22:10:03.604536   78166 main.go:141] libmachine: () Calling .GetMachineName
	I0528 22:10:03.604742   78166 main.go:141] libmachine: (newest-cni-588598) Calling .DriverName
	I0528 22:10:03.604891   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetState
	I0528 22:10:03.606735   78166 fix.go:112] recreateIfNeeded on newest-cni-588598: state=Stopped err=<nil>
	I0528 22:10:03.606756   78166 main.go:141] libmachine: (newest-cni-588598) Calling .DriverName
	W0528 22:10:03.606901   78166 fix.go:138] unexpected machine state, will restart: <nil>
	I0528 22:10:03.608698   78166 out.go:177] * Restarting existing kvm2 VM for "newest-cni-588598" ...
	I0528 22:10:03.609810   78166 main.go:141] libmachine: (newest-cni-588598) Calling .Start
	I0528 22:10:03.609957   78166 main.go:141] libmachine: (newest-cni-588598) Ensuring networks are active...
	I0528 22:10:03.610705   78166 main.go:141] libmachine: (newest-cni-588598) Ensuring network default is active
	I0528 22:10:03.611013   78166 main.go:141] libmachine: (newest-cni-588598) Ensuring network mk-newest-cni-588598 is active
	I0528 22:10:03.611420   78166 main.go:141] libmachine: (newest-cni-588598) Getting domain xml...
	I0528 22:10:03.612186   78166 main.go:141] libmachine: (newest-cni-588598) Creating domain...
	I0528 22:10:04.803094   78166 main.go:141] libmachine: (newest-cni-588598) Waiting to get IP...
	I0528 22:10:04.803873   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:04.804234   78166 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:10:04.804313   78166 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:10:04.804207   78201 retry.go:31] will retry after 257.984747ms: waiting for machine to come up
	I0528 22:10:05.063999   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:05.064497   78166 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:10:05.064525   78166 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:10:05.064456   78201 retry.go:31] will retry after 246.19476ms: waiting for machine to come up
	I0528 22:10:05.311911   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:05.312392   78166 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:10:05.312416   78166 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:10:05.312340   78201 retry.go:31] will retry after 335.114844ms: waiting for machine to come up
	I0528 22:10:05.648649   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:05.649131   78166 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:10:05.649161   78166 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:10:05.649082   78201 retry.go:31] will retry after 440.66407ms: waiting for machine to come up
	I0528 22:10:06.091690   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:06.092113   78166 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:10:06.092143   78166 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:10:06.092079   78201 retry.go:31] will retry after 596.385085ms: waiting for machine to come up
	I0528 22:10:06.689941   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:06.690445   78166 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:10:06.690478   78166 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:10:06.690405   78201 retry.go:31] will retry after 690.571827ms: waiting for machine to come up
	I0528 22:10:07.382296   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:07.382706   78166 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:10:07.382731   78166 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:10:07.382645   78201 retry.go:31] will retry after 886.933473ms: waiting for machine to come up
	I0528 22:10:08.270613   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:08.270993   78166 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:10:08.271022   78166 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:10:08.270962   78201 retry.go:31] will retry after 917.957007ms: waiting for machine to come up
	I0528 22:10:09.190755   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:09.191249   78166 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:10:09.191278   78166 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:10:09.191194   78201 retry.go:31] will retry after 1.636471321s: waiting for machine to come up
	I0528 22:10:10.829472   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:10.829998   78166 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:10:10.830024   78166 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:10:10.829930   78201 retry.go:31] will retry after 1.594778354s: waiting for machine to come up
	I0528 22:10:12.426743   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:12.427199   78166 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:10:12.427230   78166 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:10:12.427132   78201 retry.go:31] will retry after 2.561893178s: waiting for machine to come up
	I0528 22:10:14.990660   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:14.991079   78166 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:10:14.991107   78166 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:10:14.991029   78201 retry.go:31] will retry after 2.20210997s: waiting for machine to come up
	I0528 22:10:17.196545   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:17.196881   78166 main.go:141] libmachine: (newest-cni-588598) DBG | unable to find current IP address of domain newest-cni-588598 in network mk-newest-cni-588598
	I0528 22:10:17.196913   78166 main.go:141] libmachine: (newest-cni-588598) DBG | I0528 22:10:17.196844   78201 retry.go:31] will retry after 3.778097083s: waiting for machine to come up
	I0528 22:10:20.977593   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:20.978352   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has current primary IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:20.978382   78166 main.go:141] libmachine: (newest-cni-588598) Found IP for machine: 192.168.39.57
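
The "Waiting to get IP" phase above is a simple poll: each attempt asks libvirt for the domain's DHCP lease and, on failure, sleeps for a progressively longer interval before retrying. A minimal Go sketch of that retry-with-growing-delay pattern follows; lookupIP and the back-off rule are hypothetical stand-ins for illustration, not minikube's retry.go.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // lookupIP is a stand-in for asking the hypervisor for the domain's
    // current DHCP lease; it fails until the guest has booted far enough
    // to request an address. (Hypothetical helper for illustration.)
    func lookupIP(domain string) (string, error) {
        return "", errors.New("no lease yet")
    }

    // waitForIP polls lookupIP, growing the delay between attempts until
    // either an address appears or the deadline passes.
    func waitForIP(domain string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(domain); err == nil {
                return ip, nil
            }
            fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
            time.Sleep(delay)
            delay += delay / 2 // grow the back-off, roughly like the intervals in the log above
        }
        return "", fmt.Errorf("timed out waiting for %s to get an IP", domain)
    }

    func main() {
        if ip, err := waitForIP("newest-cni-588598", 2*time.Second); err != nil {
            fmt.Println("error:", err)
        } else {
            fmt.Println("found IP:", ip)
        }
    }
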
	I0528 22:10:20.978394   78166 main.go:141] libmachine: (newest-cni-588598) Reserving static IP address...
	I0528 22:10:20.978804   78166 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "newest-cni-588598", mac: "52:54:00:a4:df:c4", ip: "192.168.39.57"} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:10:14 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:10:20.978828   78166 main.go:141] libmachine: (newest-cni-588598) Reserved static IP address: 192.168.39.57
	I0528 22:10:20.978840   78166 main.go:141] libmachine: (newest-cni-588598) DBG | skip adding static IP to network mk-newest-cni-588598 - found existing host DHCP lease matching {name: "newest-cni-588598", mac: "52:54:00:a4:df:c4", ip: "192.168.39.57"}
	I0528 22:10:20.978854   78166 main.go:141] libmachine: (newest-cni-588598) DBG | Getting to WaitForSSH function...
	I0528 22:10:20.978864   78166 main.go:141] libmachine: (newest-cni-588598) Waiting for SSH to be available...
	I0528 22:10:20.980785   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:20.981077   78166 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:10:14 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:10:20.981113   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:20.981274   78166 main.go:141] libmachine: (newest-cni-588598) DBG | Using SSH client type: external
	I0528 22:10:20.981301   78166 main.go:141] libmachine: (newest-cni-588598) DBG | Using SSH private key: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598/id_rsa (-rw-------)
	I0528 22:10:20.981332   78166 main.go:141] libmachine: (newest-cni-588598) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.57 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0528 22:10:20.981351   78166 main.go:141] libmachine: (newest-cni-588598) DBG | About to run SSH command:
	I0528 22:10:20.981363   78166 main.go:141] libmachine: (newest-cni-588598) DBG | exit 0
	I0528 22:10:21.101483   78166 main.go:141] libmachine: (newest-cni-588598) DBG | SSH cmd err, output: <nil>: 
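
The WaitForSSH step shells out to the system ssh client with the non-interactive options shown above and keeps running `exit 0` until it succeeds. A rough sketch of such an external-ssh probe using os/exec; the helper and the trimmed option list are illustrative, not minikube's sshutil.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // sshProbe runs "exit 0" on the guest using the external ssh client with
    // a subset of the non-interactive options seen in the log above. A nil
    // error means the SSH daemon is up and the key is accepted.
    func sshProbe(user, addr, keyPath string) error {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "PasswordAuthentication=no",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            fmt.Sprintf("%s@%s", user, addr),
            "exit 0",
        }
        out, err := exec.Command("ssh", args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("ssh probe failed: %v (output: %s)", err, out)
        }
        return nil
    }

    func main() {
        err := sshProbe("docker", "192.168.39.57",
            "/home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598/id_rsa")
        fmt.Println("probe result:", err)
    }
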
	I0528 22:10:21.101898   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetConfigRaw
	I0528 22:10:21.102438   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetIP
	I0528 22:10:21.104898   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:21.105261   78166 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:10:14 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:10:21.105295   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:21.105499   78166 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598/config.json ...
	I0528 22:10:21.105674   78166 machine.go:94] provisionDockerMachine start ...
	I0528 22:10:21.105691   78166 main.go:141] libmachine: (newest-cni-588598) Calling .DriverName
	I0528 22:10:21.105926   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:10:21.107989   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:21.108270   78166 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:10:14 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:10:21.108289   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:21.108397   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHPort
	I0528 22:10:21.108557   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:10:21.108712   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:10:21.108837   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHUsername
	I0528 22:10:21.108994   78166 main.go:141] libmachine: Using SSH client type: native
	I0528 22:10:21.109230   78166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0528 22:10:21.109246   78166 main.go:141] libmachine: About to run SSH command:
	hostname
	I0528 22:10:21.210053   78166 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0528 22:10:21.210092   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetMachineName
	I0528 22:10:21.210344   78166 buildroot.go:166] provisioning hostname "newest-cni-588598"
	I0528 22:10:21.210366   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetMachineName
	I0528 22:10:21.210559   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:10:21.213067   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:21.213381   78166 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:10:14 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:10:21.213412   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:21.213491   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHPort
	I0528 22:10:21.213648   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:10:21.213804   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:10:21.213963   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHUsername
	I0528 22:10:21.214112   78166 main.go:141] libmachine: Using SSH client type: native
	I0528 22:10:21.214271   78166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0528 22:10:21.214282   78166 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-588598 && echo "newest-cni-588598" | sudo tee /etc/hostname
	I0528 22:10:21.334983   78166 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-588598
	
	I0528 22:10:21.335018   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:10:21.337716   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:21.338073   78166 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:10:14 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:10:21.338112   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:21.338238   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHPort
	I0528 22:10:21.338435   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:10:21.338607   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:10:21.338736   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHUsername
	I0528 22:10:21.338884   78166 main.go:141] libmachine: Using SSH client type: native
	I0528 22:10:21.339078   78166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0528 22:10:21.339102   78166 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-588598' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-588598/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-588598' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 22:10:21.446582   78166 main.go:141] libmachine: SSH cmd err, output: <nil>: 
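
The /etc/hosts command above is written to be idempotent: it only touches the file when no entry for the new hostname exists, and it either rewrites an existing 127.0.1.1 line or appends one. A small sketch that renders the same guarded script for an arbitrary hostname (illustrative helper, not the provisioner's code):

    package main

    import "fmt"

    // hostsPatchScript renders the guarded shell snippet from the log:
    // add or rewrite the 127.0.1.1 entry only when no line for the
    // hostname is present yet, so re-running provisioning is harmless.
    func hostsPatchScript(hostname string) string {
        return fmt.Sprintf(`
    if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
      fi
    fi`, hostname)
    }

    func main() {
        // The rendered script would then be handed to the SSH runner,
        // just like the command shown in the log above.
        fmt.Println(hostsPatchScript("newest-cni-588598"))
    }
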
	I0528 22:10:21.446608   78166 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18966-3963/.minikube CaCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18966-3963/.minikube}
	I0528 22:10:21.446629   78166 buildroot.go:174] setting up certificates
	I0528 22:10:21.446640   78166 provision.go:84] configureAuth start
	I0528 22:10:21.446651   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetMachineName
	I0528 22:10:21.446906   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetIP
	I0528 22:10:21.449345   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:21.449708   78166 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:10:14 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:10:21.449740   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:21.449912   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:10:21.451869   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:21.452097   78166 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:10:14 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:10:21.452116   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:21.452264   78166 provision.go:143] copyHostCerts
	I0528 22:10:21.452336   78166 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem, removing ...
	I0528 22:10:21.452355   78166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem
	I0528 22:10:21.452422   78166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/ca.pem (1078 bytes)
	I0528 22:10:21.452506   78166 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem, removing ...
	I0528 22:10:21.452514   78166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem
	I0528 22:10:21.452538   78166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/cert.pem (1123 bytes)
	I0528 22:10:21.452586   78166 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem, removing ...
	I0528 22:10:21.452593   78166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem
	I0528 22:10:21.452612   78166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18966-3963/.minikube/key.pem (1675 bytes)
	I0528 22:10:21.452660   78166 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem org=jenkins.newest-cni-588598 san=[127.0.0.1 192.168.39.57 localhost minikube newest-cni-588598]
	I0528 22:10:21.689350   78166 provision.go:177] copyRemoteCerts
	I0528 22:10:21.689399   78166 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 22:10:21.689425   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:10:21.692062   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:21.692596   78166 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:10:14 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:10:21.692627   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:21.692877   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHPort
	I0528 22:10:21.693071   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:10:21.693226   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHUsername
	I0528 22:10:21.693398   78166 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598/id_rsa Username:docker}
	I0528 22:10:21.776437   78166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0528 22:10:21.804184   78166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0528 22:10:21.831299   78166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0528 22:10:21.858000   78166 provision.go:87] duration metric: took 411.350402ms to configureAuth
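
The "generating server cert" step above signs a server certificate for the listed SANs with the existing minikube CA. A rough crypto/x509 sketch of that idea; the throwaway in-memory CA, the RSA key size and the helper names are assumptions made to keep the example self-contained, not minikube's provision.go.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    // signServerCert issues a server certificate for the given SANs, signed
    // by an existing CA key pair, roughly what the step above does with
    // ca.pem and ca-key.pem.
    func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, sans []string) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-588598"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        for _, san := range sans {
            if ip := net.ParseIP(san); ip != nil {
                tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
            } else {
                tmpl.DNSNames = append(tmpl.DNSNames, san)
            }
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
    }

    func main() {
        // For a self-contained demo, build a throwaway CA in memory; the real
        // flow loads ca.pem / ca-key.pem from .minikube/certs instead.
        // Errors are ignored here only to keep the demo short.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        sans := []string{"127.0.0.1", "192.168.39.57", "localhost", "minikube", "newest-cni-588598"}
        certPEM, _, err := signServerCert(caCert, caKey, sans)
        if err != nil {
            panic(err)
        }
        os.Stdout.Write(certPEM)
    }
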
	I0528 22:10:21.858022   78166 buildroot.go:189] setting minikube options for container-runtime
	I0528 22:10:21.858216   78166 config.go:182] Loaded profile config "newest-cni-588598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 22:10:21.858310   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:10:21.860992   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:21.861399   78166 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:10:14 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:10:21.861418   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:21.861716   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHPort
	I0528 22:10:21.861930   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:10:21.862076   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:10:21.862194   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHUsername
	I0528 22:10:21.862377   78166 main.go:141] libmachine: Using SSH client type: native
	I0528 22:10:21.862595   78166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0528 22:10:21.862617   78166 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0528 22:10:22.133213   78166 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0528 22:10:22.133239   78166 machine.go:97] duration metric: took 1.027552944s to provisionDockerMachine
	I0528 22:10:22.133249   78166 start.go:293] postStartSetup for "newest-cni-588598" (driver="kvm2")
	I0528 22:10:22.133273   78166 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0528 22:10:22.133288   78166 main.go:141] libmachine: (newest-cni-588598) Calling .DriverName
	I0528 22:10:22.133619   78166 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0528 22:10:22.133666   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:10:22.136533   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:22.136905   78166 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:10:14 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:10:22.136943   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:22.137186   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHPort
	I0528 22:10:22.137415   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:10:22.137603   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHUsername
	I0528 22:10:22.137743   78166 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598/id_rsa Username:docker}
	I0528 22:10:22.216515   78166 ssh_runner.go:195] Run: cat /etc/os-release
	I0528 22:10:22.220904   78166 info.go:137] Remote host: Buildroot 2023.02.9
	I0528 22:10:22.220939   78166 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/addons for local assets ...
	I0528 22:10:22.221009   78166 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-3963/.minikube/files for local assets ...
	I0528 22:10:22.221098   78166 filesync.go:149] local asset: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem -> 117602.pem in /etc/ssl/certs
	I0528 22:10:22.221207   78166 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0528 22:10:22.230605   78166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem --> /etc/ssl/certs/117602.pem (1708 bytes)
	I0528 22:10:22.256224   78166 start.go:296] duration metric: took 122.964127ms for postStartSetup
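
The filesync scan above walks the local .minikube/files tree and maps each file to the absolute path it should occupy on the guest (for example files/etc/ssl/certs/117602.pem becomes /etc/ssl/certs/117602.pem). A sketch of that mapping with filepath.WalkDir; the function is illustrative, not minikube's filesync package.

    package main

    import (
        "fmt"
        "io/fs"
        "path/filepath"
        "strings"
    )

    // scanLocalAssets walks a local "files" tree and maps every file to the
    // absolute destination path on the guest.
    func scanLocalAssets(root string) (map[string]string, error) {
        assets := map[string]string{}
        err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
            if err != nil || d.IsDir() {
                return err
            }
            rel, err := filepath.Rel(root, path)
            if err != nil {
                return err
            }
            assets[path] = "/" + strings.ReplaceAll(rel, string(filepath.Separator), "/")
            return nil
        })
        return assets, err
    }

    func main() {
        // Pointed at a real directory this prints source -> destination pairs;
        // the path below is the one from the log and will not exist locally.
        assets, err := scanLocalAssets("/home/jenkins/minikube-integration/18966-3963/.minikube/files")
        if err != nil {
            fmt.Println("scan failed:", err)
            return
        }
        for src, dst := range assets {
            fmt.Printf("%s -> %s\n", src, dst)
        }
    }
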
	I0528 22:10:22.256262   78166 fix.go:56] duration metric: took 18.666995938s for fixHost
	I0528 22:10:22.256300   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:10:22.259322   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:22.259694   78166 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:10:14 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:10:22.259724   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:22.259884   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHPort
	I0528 22:10:22.260085   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:10:22.260257   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:10:22.260408   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHUsername
	I0528 22:10:22.260601   78166 main.go:141] libmachine: Using SSH client type: native
	I0528 22:10:22.260758   78166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0528 22:10:22.260767   78166 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0528 22:10:22.366264   78166 main.go:141] libmachine: SSH cmd err, output: <nil>: 1716934222.338737256
	
	I0528 22:10:22.366281   78166 fix.go:216] guest clock: 1716934222.338737256
	I0528 22:10:22.366287   78166 fix.go:229] Guest: 2024-05-28 22:10:22.338737256 +0000 UTC Remote: 2024-05-28 22:10:22.256266989 +0000 UTC m=+18.801025807 (delta=82.470267ms)
	I0528 22:10:22.366329   78166 fix.go:200] guest clock delta is within tolerance: 82.470267ms
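
The guest-clock check above runs `date +%s.%N` on the VM, parses the seconds.nanoseconds output and compares it with the host clock against a tolerance. A small sketch of that comparison; the 2s tolerance is an illustrative figure, not minikube's actual threshold.

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock turns the output of `date +%s.%N` (seconds.nanoseconds)
    // into a time.Time, as in the guest-clock check above.
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            nsec, err = strconv.ParseInt(parts[1], 10, 64)
            if err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        // Values taken from the log lines above; the delta works out to the
        // logged 82.470267ms.
        guest, err := parseGuestClock("1716934222.338737256")
        if err != nil {
            panic(err)
        }
        host := time.Unix(1716934222, 256266989) // the "Remote:" timestamp above
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = 2 * time.Second
        fmt.Printf("guest clock delta %v within tolerance %v: %v\n", delta, tolerance, delta <= tolerance)
    }
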
	I0528 22:10:22.366336   78166 start.go:83] releasing machines lock for "newest-cni-588598", held for 18.777087397s
	I0528 22:10:22.366355   78166 main.go:141] libmachine: (newest-cni-588598) Calling .DriverName
	I0528 22:10:22.366636   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetIP
	I0528 22:10:22.369373   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:22.369680   78166 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:10:14 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:10:22.369707   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:22.369827   78166 main.go:141] libmachine: (newest-cni-588598) Calling .DriverName
	I0528 22:10:22.370296   78166 main.go:141] libmachine: (newest-cni-588598) Calling .DriverName
	I0528 22:10:22.370462   78166 main.go:141] libmachine: (newest-cni-588598) Calling .DriverName
	I0528 22:10:22.370573   78166 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0528 22:10:22.370619   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:10:22.370718   78166 ssh_runner.go:195] Run: cat /version.json
	I0528 22:10:22.370743   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:10:22.373212   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:22.373518   78166 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:10:14 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:10:22.373544   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:22.373576   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:22.373715   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHPort
	I0528 22:10:22.373896   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:10:22.374075   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHUsername
	I0528 22:10:22.374076   78166 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:10:14 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:10:22.374124   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:22.374196   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHPort
	I0528 22:10:22.374331   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:10:22.374367   78166 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598/id_rsa Username:docker}
	I0528 22:10:22.374466   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHUsername
	I0528 22:10:22.374594   78166 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598/id_rsa Username:docker}
	I0528 22:10:22.472175   78166 ssh_runner.go:195] Run: systemctl --version
	I0528 22:10:22.478366   78166 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0528 22:10:22.629498   78166 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0528 22:10:22.636015   78166 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0528 22:10:22.636090   78166 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 22:10:22.652653   78166 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0528 22:10:22.652672   78166 start.go:494] detecting cgroup driver to use...
	I0528 22:10:22.652718   78166 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0528 22:10:22.671583   78166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0528 22:10:22.687134   78166 docker.go:217] disabling cri-docker service (if available) ...
	I0528 22:10:22.687216   78166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0528 22:10:22.701618   78166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0528 22:10:22.714931   78166 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0528 22:10:22.829917   78166 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0528 22:10:22.984305   78166 docker.go:233] disabling docker service ...
	I0528 22:10:22.984408   78166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0528 22:10:22.998601   78166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0528 22:10:23.011502   78166 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0528 22:10:23.146935   78166 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0528 22:10:23.254677   78166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0528 22:10:23.268481   78166 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 22:10:23.286930   78166 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0528 22:10:23.287000   78166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 22:10:23.296967   78166 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0528 22:10:23.297023   78166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 22:10:23.307277   78166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 22:10:23.317449   78166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 22:10:23.327620   78166 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0528 22:10:23.337927   78166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 22:10:23.347809   78166 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 22:10:23.364698   78166 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0528 22:10:23.374698   78166 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0528 22:10:23.384139   78166 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0528 22:10:23.384199   78166 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0528 22:10:23.397676   78166 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0528 22:10:23.407326   78166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 22:10:23.525666   78166 ssh_runner.go:195] Run: sudo systemctl restart crio
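
The sed commands above patch /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and switch the cgroup manager before restarting CRI-O. A sketch of the same two substitutions done as in-memory regex replacements; file handling and the extra conmon/sysctl edits are left out.

    package main

    import (
        "fmt"
        "regexp"
    )

    // patchCrioConf applies the two substitutions the sed commands above
    // perform: pin the pause image and set the cgroup manager. The input is
    // the config text and the return value is the patched text.
    func patchCrioConf(conf, pauseImage, cgroupManager string) string {
        pauseRe := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        cgroupRe := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
        conf = pauseRe.ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
        conf = cgroupRe.ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
        return conf
    }

    func main() {
        sample := `[crio.image]
    pause_image = "registry.k8s.io/pause:3.2"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "system.slice"
    `
        fmt.Println(patchCrioConf(sample, "registry.k8s.io/pause:3.9", "cgroupfs"))
    }
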
	I0528 22:10:23.666020   78166 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0528 22:10:23.666086   78166 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0528 22:10:23.671601   78166 start.go:562] Will wait 60s for crictl version
	I0528 22:10:23.671681   78166 ssh_runner.go:195] Run: which crictl
	I0528 22:10:23.675592   78166 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0528 22:10:23.720429   78166 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0528 22:10:23.720561   78166 ssh_runner.go:195] Run: crio --version
	I0528 22:10:23.747317   78166 ssh_runner.go:195] Run: crio --version
	I0528 22:10:23.775385   78166 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0528 22:10:23.776563   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetIP
	I0528 22:10:23.779052   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:23.779295   78166 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:10:14 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:10:23.779330   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:23.779539   78166 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0528 22:10:23.783666   78166 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 22:10:23.797649   78166 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0528 22:10:23.798769   78166 kubeadm.go:877] updating cluster {Name:newest-cni-588598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-588598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0528 22:10:23.798876   78166 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 22:10:23.798924   78166 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 22:10:23.833487   78166 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0528 22:10:23.833568   78166 ssh_runner.go:195] Run: which lz4
	I0528 22:10:23.837384   78166 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0528 22:10:23.841397   78166 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0528 22:10:23.841426   78166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0528 22:10:25.245870   78166 crio.go:462] duration metric: took 1.408510459s to copy over tarball
	I0528 22:10:25.245951   78166 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0528 22:10:27.437359   78166 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.191373201s)
	I0528 22:10:27.437397   78166 crio.go:469] duration metric: took 2.191502921s to extract the tarball
	I0528 22:10:27.437406   78166 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0528 22:10:27.477666   78166 ssh_runner.go:195] Run: sudo crictl images --output json
	I0528 22:10:27.519220   78166 crio.go:514] all images are preloaded for cri-o runtime.
	I0528 22:10:27.519242   78166 cache_images.go:84] Images are preloaded, skipping loading
	I0528 22:10:27.519250   78166 kubeadm.go:928] updating node { 192.168.39.57 8443 v1.30.1 crio true true} ...
	I0528 22:10:27.519374   78166 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-588598 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.57
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:newest-cni-588598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0528 22:10:27.519455   78166 ssh_runner.go:195] Run: crio config
	I0528 22:10:27.568276   78166 cni.go:84] Creating CNI manager for ""
	I0528 22:10:27.568299   78166 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 22:10:27.568314   78166 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0528 22:10:27.568333   78166 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.57 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-588598 NodeName:newest-cni-588598 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.57"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.57 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0528 22:10:27.568470   78166 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.57
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-588598"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.57
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.57"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
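
The kubeadm config above is rendered from the option struct printed a few lines earlier. A trimmed sketch of that render step with text/template, filling only the networking-related fields; the template and parameter names are illustrative, not minikube's bootstrapper templates.

    package main

    import (
        "os"
        "text/template"
    )

    // A cut-down stand-in for the config rendering above: the real template
    // covers Init/Cluster/Kubelet/KubeProxy configuration, this one only
    // shows the mechanism.
    const clusterConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    clusterName: mk
    controlPlaneEndpoint: {{.ControlPlaneAddress}}:{{.APIServerPort}}
    kubernetesVersion: {{.KubernetesVersion}}
    networking:
      dnsDomain: {{.DNSDomain}}
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceCIDR}}
    `

    type clusterParams struct {
        ControlPlaneAddress string
        APIServerPort       int
        KubernetesVersion   string
        DNSDomain           string
        PodSubnet           string
        ServiceCIDR         string
    }

    func main() {
        // Values copied from the kubeadm options line in the log above.
        p := clusterParams{
            ControlPlaneAddress: "control-plane.minikube.internal",
            APIServerPort:       8443,
            KubernetesVersion:   "v1.30.1",
            DNSDomain:           "cluster.local",
            PodSubnet:           "10.42.0.0/16",
            ServiceCIDR:         "10.96.0.0/12",
        }
        tmpl := template.Must(template.New("kubeadm").Parse(clusterConfigTmpl))
        if err := tmpl.Execute(os.Stdout, p); err != nil {
            panic(err)
        }
    }
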
	I0528 22:10:27.568540   78166 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0528 22:10:27.578929   78166 binaries.go:44] Found k8s binaries, skipping transfer
	I0528 22:10:27.578985   78166 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0528 22:10:27.589501   78166 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (353 bytes)
	I0528 22:10:27.608053   78166 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0528 22:10:27.624686   78166 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2282 bytes)
	I0528 22:10:27.642539   78166 ssh_runner.go:195] Run: grep 192.168.39.57	control-plane.minikube.internal$ /etc/hosts
	I0528 22:10:27.646439   78166 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.57	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 22:10:27.659110   78166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 22:10:27.793923   78166 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 22:10:27.812426   78166 certs.go:68] Setting up /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598 for IP: 192.168.39.57
	I0528 22:10:27.812454   78166 certs.go:194] generating shared ca certs ...
	I0528 22:10:27.812477   78166 certs.go:226] acquiring lock for ca certs: {Name:mkf081a2e878fd451cfbdf01dc28a5f7a6a3abc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 22:10:27.812668   78166 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key
	I0528 22:10:27.812731   78166 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key
	I0528 22:10:27.812744   78166 certs.go:256] generating profile certs ...
	I0528 22:10:27.812872   78166 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598/client.key
	I0528 22:10:27.812971   78166 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598/apiserver.key.3d9132ba
	I0528 22:10:27.813030   78166 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598/proxy-client.key
	I0528 22:10:27.813195   78166 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem (1338 bytes)
	W0528 22:10:27.813245   78166 certs.go:480] ignoring /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760_empty.pem, impossibly tiny 0 bytes
	I0528 22:10:27.813263   78166 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca-key.pem (1675 bytes)
	I0528 22:10:27.813295   78166 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/ca.pem (1078 bytes)
	I0528 22:10:27.813325   78166 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/cert.pem (1123 bytes)
	I0528 22:10:27.813354   78166 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/certs/key.pem (1675 bytes)
	I0528 22:10:27.813424   78166 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem (1708 bytes)
	I0528 22:10:27.814983   78166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0528 22:10:27.844995   78166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0528 22:10:27.883085   78166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0528 22:10:27.920052   78166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0528 22:10:27.948786   78166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0528 22:10:27.975806   78166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0528 22:10:28.005583   78166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0528 22:10:28.030585   78166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/newest-cni-588598/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0528 22:10:28.056770   78166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/certs/11760.pem --> /usr/share/ca-certificates/11760.pem (1338 bytes)
	I0528 22:10:28.082575   78166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/ssl/certs/117602.pem --> /usr/share/ca-certificates/117602.pem (1708 bytes)
	I0528 22:10:28.107581   78166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0528 22:10:28.132689   78166 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0528 22:10:28.150296   78166 ssh_runner.go:195] Run: openssl version
	I0528 22:10:28.156546   78166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11760.pem && ln -fs /usr/share/ca-certificates/11760.pem /etc/ssl/certs/11760.pem"
	I0528 22:10:28.167235   78166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11760.pem
	I0528 22:10:28.171747   78166 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 28 20:34 /usr/share/ca-certificates/11760.pem
	I0528 22:10:28.171795   78166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11760.pem
	I0528 22:10:28.177719   78166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11760.pem /etc/ssl/certs/51391683.0"
	I0528 22:10:28.188095   78166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/117602.pem && ln -fs /usr/share/ca-certificates/117602.pem /etc/ssl/certs/117602.pem"
	I0528 22:10:28.198282   78166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117602.pem
	I0528 22:10:28.202886   78166 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 28 20:34 /usr/share/ca-certificates/117602.pem
	I0528 22:10:28.202935   78166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117602.pem
	I0528 22:10:28.208624   78166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/117602.pem /etc/ssl/certs/3ec20f2e.0"
	I0528 22:10:28.218860   78166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0528 22:10:28.229289   78166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0528 22:10:28.233855   78166 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 28 20:22 /usr/share/ca-certificates/minikubeCA.pem
	I0528 22:10:28.233908   78166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0528 22:10:28.239707   78166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0528 22:10:28.250693   78166 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0528 22:10:28.255585   78166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0528 22:10:28.262175   78166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0528 22:10:28.268550   78166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0528 22:10:28.275531   78166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0528 22:10:28.282766   78166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0528 22:10:28.289007   78166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
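	(A minimal Go sketch, not minikube's implementation, of the 24-hour expiry test that the "openssl x509 -checkend 86400" invocations above perform. The certificate path is one of those checked in the log and is used purely as an illustration.)

	// certcheck.go - sketch of a 24h certificate-expiry check equivalent to
	// `openssl x509 -checkend 86400`.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the certificate at path expires within window.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data) // first PEM block is assumed to be the certificate
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		// Illustrative path; the log checks several certs under /var/lib/minikube/certs/.
		expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", expiring)
	}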
	I0528 22:10:28.295343   78166 kubeadm.go:391] StartCluster: {Name:newest-cni-588598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-588598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 22:10:28.295482   78166 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0528 22:10:28.295536   78166 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0528 22:10:28.338152   78166 cri.go:89] found id: ""
	I0528 22:10:28.338229   78166 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0528 22:10:28.349119   78166 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0528 22:10:28.349140   78166 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0528 22:10:28.349144   78166 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0528 22:10:28.349187   78166 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0528 22:10:28.359484   78166 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0528 22:10:28.360054   78166 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-588598" does not appear in /home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 22:10:28.360288   78166 kubeconfig.go:62] /home/jenkins/minikube-integration/18966-3963/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-588598" cluster setting kubeconfig missing "newest-cni-588598" context setting]
	I0528 22:10:28.360702   78166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/kubeconfig: {Name:mkb9ab62df00efc21274382bbe7156f03baabac9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 22:10:28.361961   78166 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0528 22:10:28.371797   78166 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.57
	I0528 22:10:28.371823   78166 kubeadm.go:1154] stopping kube-system containers ...
	I0528 22:10:28.371832   78166 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0528 22:10:28.371876   78166 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0528 22:10:28.412240   78166 cri.go:89] found id: ""
	I0528 22:10:28.412312   78166 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0528 22:10:28.429416   78166 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0528 22:10:28.439549   78166 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0528 22:10:28.439577   78166 kubeadm.go:156] found existing configuration files:
	
	I0528 22:10:28.439625   78166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0528 22:10:28.448717   78166 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0528 22:10:28.448776   78166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0528 22:10:28.458405   78166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0528 22:10:28.467602   78166 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0528 22:10:28.467665   78166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0528 22:10:28.477711   78166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0528 22:10:28.486931   78166 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0528 22:10:28.487016   78166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0528 22:10:28.497193   78166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0528 22:10:28.506637   78166 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0528 22:10:28.506693   78166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0528 22:10:28.516299   78166 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0528 22:10:28.526159   78166 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 22:10:28.644338   78166 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 22:10:29.980562   78166 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.336189588s)
	I0528 22:10:29.980590   78166 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0528 22:10:30.192135   78166 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 22:10:30.264552   78166 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0528 22:10:30.356685   78166 api_server.go:52] waiting for apiserver process to appear ...
	I0528 22:10:30.356780   78166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 22:10:30.857038   78166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 22:10:31.357524   78166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 22:10:31.422626   78166 api_server.go:72] duration metric: took 1.065942329s to wait for apiserver process to appear ...
	I0528 22:10:31.422654   78166 api_server.go:88] waiting for apiserver healthz status ...
	I0528 22:10:31.422676   78166 api_server.go:253] Checking apiserver healthz at https://192.168.39.57:8443/healthz ...
	I0528 22:10:33.761313   78166 api_server.go:279] https://192.168.39.57:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0528 22:10:33.761353   78166 api_server.go:103] status: https://192.168.39.57:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0528 22:10:33.761371   78166 api_server.go:253] Checking apiserver healthz at https://192.168.39.57:8443/healthz ...
	I0528 22:10:33.805328   78166 api_server.go:279] https://192.168.39.57:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 22:10:33.805366   78166 api_server.go:103] status: https://192.168.39.57:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 22:10:33.923552   78166 api_server.go:253] Checking apiserver healthz at https://192.168.39.57:8443/healthz ...
	I0528 22:10:33.933064   78166 api_server.go:279] https://192.168.39.57:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 22:10:33.933088   78166 api_server.go:103] status: https://192.168.39.57:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 22:10:34.423714   78166 api_server.go:253] Checking apiserver healthz at https://192.168.39.57:8443/healthz ...
	I0528 22:10:34.445971   78166 api_server.go:279] https://192.168.39.57:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 22:10:34.445997   78166 api_server.go:103] status: https://192.168.39.57:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 22:10:34.923401   78166 api_server.go:253] Checking apiserver healthz at https://192.168.39.57:8443/healthz ...
	I0528 22:10:34.938319   78166 api_server.go:279] https://192.168.39.57:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 22:10:34.938353   78166 api_server.go:103] status: https://192.168.39.57:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 22:10:35.422865   78166 api_server.go:253] Checking apiserver healthz at https://192.168.39.57:8443/healthz ...
	I0528 22:10:35.427013   78166 api_server.go:279] https://192.168.39.57:8443/healthz returned 200:
	ok
	I0528 22:10:35.434109   78166 api_server.go:141] control plane version: v1.30.1
	I0528 22:10:35.434131   78166 api_server.go:131] duration metric: took 4.011469454s to wait for apiserver health ...
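	(A minimal Go sketch, not minikube's actual api_server.go implementation, of the readiness loop logged above: poll the /healthz endpoint, tolerate the 403/500 responses seen while post-start hooks settle, and stop at the first 200. The URL and roughly 500 ms retry cadence come from the log; the overall 4-minute budget is an illustrative assumption.)

	// healthwait.go - sketch of waiting for an apiserver /healthz endpoint to return 200.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The probe runs before kubeconfig-based auth is wired up (the log's
			// first reply is a 403 for system:anonymous), so certificate
			// verification is skipped for this unauthenticated check.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // body is "ok"
				}
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.57:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}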
	I0528 22:10:35.434139   78166 cni.go:84] Creating CNI manager for ""
	I0528 22:10:35.434144   78166 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 22:10:35.436088   78166 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0528 22:10:35.437273   78166 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0528 22:10:35.456009   78166 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0528 22:10:35.487261   78166 system_pods.go:43] waiting for kube-system pods to appear ...
	I0528 22:10:35.497647   78166 system_pods.go:59] 8 kube-system pods found
	I0528 22:10:35.497693   78166 system_pods.go:61] "coredns-7db6d8ff4d-wk5f4" [9dcd7b17-fc19-4468-b8f9-76a2fb7f1ec9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0528 22:10:35.497706   78166 system_pods.go:61] "etcd-newest-cni-588598" [785dbf00-a5a6-4946-8a36-6200a875dbcc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0528 22:10:35.497721   78166 system_pods.go:61] "kube-apiserver-newest-cni-588598" [c9b79154-b6b7-494e-92b1-c447580db787] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0528 22:10:35.497731   78166 system_pods.go:61] "kube-controller-manager-newest-cni-588598" [f14bfaa9-0a88-4c01-9065-765797138f5d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0528 22:10:35.497741   78166 system_pods.go:61] "kube-proxy-8jgfw" [8125c94f-11df-4eee-8612-9546dc054146] Running
	I0528 22:10:35.497749   78166 system_pods.go:61] "kube-scheduler-newest-cni-588598" [3e3160b5-e111-4a5e-9082-c9ae2a6633c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0528 22:10:35.497772   78166 system_pods.go:61] "metrics-server-569cc877fc-zhskl" [af95aae0-a143-4c72-a193-3a097270666a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0528 22:10:35.497783   78166 system_pods.go:61] "storage-provisioner" [9993a26e-0e7d-45d6-ac6f-3672e3390ba5] Running
	I0528 22:10:35.497791   78166 system_pods.go:74] duration metric: took 10.504284ms to wait for pod list to return data ...
	I0528 22:10:35.497799   78166 node_conditions.go:102] verifying NodePressure condition ...
	I0528 22:10:35.500864   78166 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 22:10:35.500896   78166 node_conditions.go:123] node cpu capacity is 2
	I0528 22:10:35.500905   78166 node_conditions.go:105] duration metric: took 3.100481ms to run NodePressure ...
	I0528 22:10:35.500920   78166 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 22:10:35.765589   78166 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0528 22:10:35.777929   78166 ops.go:34] apiserver oom_adj: -16
	I0528 22:10:35.777955   78166 kubeadm.go:591] duration metric: took 7.428804577s to restartPrimaryControlPlane
	I0528 22:10:35.777967   78166 kubeadm.go:393] duration metric: took 7.48263173s to StartCluster
	I0528 22:10:35.777988   78166 settings.go:142] acquiring lock: {Name:mkc1182212d780ab38539839372c25eb989b2e70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 22:10:35.778104   78166 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 22:10:35.779254   78166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/kubeconfig: {Name:mkb9ab62df00efc21274382bbe7156f03baabac9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 22:10:35.779554   78166 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0528 22:10:35.781326   78166 out.go:177] * Verifying Kubernetes components...
	I0528 22:10:35.779655   78166 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0528 22:10:35.779731   78166 config.go:182] Loaded profile config "newest-cni-588598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 22:10:35.782715   78166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 22:10:35.782725   78166 addons.go:69] Setting default-storageclass=true in profile "newest-cni-588598"
	I0528 22:10:35.782730   78166 addons.go:69] Setting metrics-server=true in profile "newest-cni-588598"
	I0528 22:10:35.782753   78166 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-588598"
	I0528 22:10:35.782718   78166 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-588598"
	I0528 22:10:35.782822   78166 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-588598"
	W0528 22:10:35.782834   78166 addons.go:243] addon storage-provisioner should already be in state true
	I0528 22:10:35.782867   78166 host.go:66] Checking if "newest-cni-588598" exists ...
	I0528 22:10:35.782714   78166 addons.go:69] Setting dashboard=true in profile "newest-cni-588598"
	I0528 22:10:35.782966   78166 addons.go:234] Setting addon dashboard=true in "newest-cni-588598"
	W0528 22:10:35.782979   78166 addons.go:243] addon dashboard should already be in state true
	I0528 22:10:35.782755   78166 addons.go:234] Setting addon metrics-server=true in "newest-cni-588598"
	I0528 22:10:35.783012   78166 host.go:66] Checking if "newest-cni-588598" exists ...
	W0528 22:10:35.783027   78166 addons.go:243] addon metrics-server should already be in state true
	I0528 22:10:35.783066   78166 host.go:66] Checking if "newest-cni-588598" exists ...
	I0528 22:10:35.783180   78166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 22:10:35.783225   78166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 22:10:35.783249   78166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 22:10:35.783274   78166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 22:10:35.783375   78166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 22:10:35.783401   78166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 22:10:35.783500   78166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 22:10:35.783543   78166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 22:10:35.800652   78166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44733
	I0528 22:10:35.801119   78166 main.go:141] libmachine: () Calling .GetVersion
	I0528 22:10:35.801775   78166 main.go:141] libmachine: Using API Version  1
	I0528 22:10:35.801806   78166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 22:10:35.802183   78166 main.go:141] libmachine: () Calling .GetMachineName
	I0528 22:10:35.802780   78166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 22:10:35.802829   78166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 22:10:35.802898   78166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38249
	I0528 22:10:35.803058   78166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43811
	I0528 22:10:35.803403   78166 main.go:141] libmachine: () Calling .GetVersion
	I0528 22:10:35.803481   78166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44241
	I0528 22:10:35.803502   78166 main.go:141] libmachine: () Calling .GetVersion
	I0528 22:10:35.803811   78166 main.go:141] libmachine: () Calling .GetVersion
	I0528 22:10:35.803900   78166 main.go:141] libmachine: Using API Version  1
	I0528 22:10:35.803925   78166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 22:10:35.804190   78166 main.go:141] libmachine: Using API Version  1
	I0528 22:10:35.804208   78166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 22:10:35.804338   78166 main.go:141] libmachine: Using API Version  1
	I0528 22:10:35.804354   78166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 22:10:35.804415   78166 main.go:141] libmachine: () Calling .GetMachineName
	I0528 22:10:35.804530   78166 main.go:141] libmachine: () Calling .GetMachineName
	I0528 22:10:35.804663   78166 main.go:141] libmachine: () Calling .GetMachineName
	I0528 22:10:35.804717   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetState
	I0528 22:10:35.804954   78166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 22:10:35.804991   78166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 22:10:35.805160   78166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 22:10:35.805183   78166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 22:10:35.807367   78166 addons.go:234] Setting addon default-storageclass=true in "newest-cni-588598"
	W0528 22:10:35.807385   78166 addons.go:243] addon default-storageclass should already be in state true
	I0528 22:10:35.807412   78166 host.go:66] Checking if "newest-cni-588598" exists ...
	I0528 22:10:35.807632   78166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 22:10:35.807658   78166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 22:10:35.823082   78166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37265
	I0528 22:10:35.823600   78166 main.go:141] libmachine: () Calling .GetVersion
	I0528 22:10:35.824034   78166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40657
	I0528 22:10:35.824163   78166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36739
	I0528 22:10:35.824444   78166 main.go:141] libmachine: Using API Version  1
	I0528 22:10:35.824457   78166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 22:10:35.824520   78166 main.go:141] libmachine: () Calling .GetVersion
	I0528 22:10:35.824583   78166 main.go:141] libmachine: () Calling .GetVersion
	I0528 22:10:35.824980   78166 main.go:141] libmachine: Using API Version  1
	I0528 22:10:35.825000   78166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 22:10:35.825044   78166 main.go:141] libmachine: () Calling .GetMachineName
	I0528 22:10:35.825154   78166 main.go:141] libmachine: Using API Version  1
	I0528 22:10:35.825169   78166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 22:10:35.825522   78166 main.go:141] libmachine: () Calling .GetMachineName
	I0528 22:10:35.825696   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetState
	I0528 22:10:35.825826   78166 main.go:141] libmachine: () Calling .GetMachineName
	I0528 22:10:35.825995   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetState
	I0528 22:10:35.827698   78166 main.go:141] libmachine: (newest-cni-588598) Calling .DriverName
	I0528 22:10:35.827890   78166 main.go:141] libmachine: (newest-cni-588598) Calling .DriverName
	I0528 22:10:35.827969   78166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43809
	I0528 22:10:35.827998   78166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 22:10:35.828023   78166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 22:10:35.829890   78166 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0528 22:10:35.828537   78166 main.go:141] libmachine: () Calling .GetVersion
	I0528 22:10:35.831362   78166 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0528 22:10:35.831367   78166 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0528 22:10:35.831384   78166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0528 22:10:35.831407   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:10:35.832763   78166 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0528 22:10:35.831943   78166 main.go:141] libmachine: Using API Version  1
	I0528 22:10:35.832782   78166 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0528 22:10:35.832802   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:10:35.832819   78166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 22:10:35.833207   78166 main.go:141] libmachine: () Calling .GetMachineName
	I0528 22:10:35.833359   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetState
	I0528 22:10:35.835351   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:35.835892   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHPort
	I0528 22:10:35.835931   78166 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:10:14 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:10:35.835970   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:35.836130   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:10:35.836291   78166 main.go:141] libmachine: (newest-cni-588598) Calling .DriverName
	I0528 22:10:35.836354   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:35.836379   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHUsername
	I0528 22:10:35.838121   78166 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0528 22:10:35.836741   78166 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598/id_rsa Username:docker}
	I0528 22:10:35.836763   78166 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:10:14 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:10:35.837036   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHPort
	I0528 22:10:35.839385   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:35.840604   78166 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0528 22:10:35.839618   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:10:35.841806   78166 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0528 22:10:35.841824   78166 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0528 22:10:35.841840   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:10:35.841995   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHUsername
	I0528 22:10:35.842166   78166 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598/id_rsa Username:docker}
	I0528 22:10:35.844621   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHPort
	I0528 22:10:35.844645   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:35.844671   78166 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:10:14 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:10:35.844694   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:35.844783   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:10:35.844947   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHUsername
	I0528 22:10:35.845102   78166 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598/id_rsa Username:docker}
	I0528 22:10:35.847124   78166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39001
	I0528 22:10:35.847461   78166 main.go:141] libmachine: () Calling .GetVersion
	I0528 22:10:35.847996   78166 main.go:141] libmachine: Using API Version  1
	I0528 22:10:35.848028   78166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 22:10:35.848412   78166 main.go:141] libmachine: () Calling .GetMachineName
	I0528 22:10:35.848594   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetState
	I0528 22:10:35.849784   78166 main.go:141] libmachine: (newest-cni-588598) Calling .DriverName
	I0528 22:10:35.850006   78166 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0528 22:10:35.850020   78166 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0528 22:10:35.850036   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHHostname
	I0528 22:10:35.852949   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:35.853321   78166 main.go:141] libmachine: (newest-cni-588598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:df:c4", ip: ""} in network mk-newest-cni-588598: {Iface:virbr1 ExpiryTime:2024-05-28 23:10:14 +0000 UTC Type:0 Mac:52:54:00:a4:df:c4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:newest-cni-588598 Clientid:01:52:54:00:a4:df:c4}
	I0528 22:10:35.853343   78166 main.go:141] libmachine: (newest-cni-588598) DBG | domain newest-cni-588598 has defined IP address 192.168.39.57 and MAC address 52:54:00:a4:df:c4 in network mk-newest-cni-588598
	I0528 22:10:35.853564   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHPort
	I0528 22:10:35.853728   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHKeyPath
	I0528 22:10:35.853961   78166 main.go:141] libmachine: (newest-cni-588598) Calling .GetSSHUsername
	I0528 22:10:35.854070   78166 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/newest-cni-588598/id_rsa Username:docker}
	I0528 22:10:35.986115   78166 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 22:10:36.006920   78166 api_server.go:52] waiting for apiserver process to appear ...
	I0528 22:10:36.007010   78166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 22:10:36.022543   78166 api_server.go:72] duration metric: took 242.943759ms to wait for apiserver process to appear ...
	I0528 22:10:36.022568   78166 api_server.go:88] waiting for apiserver healthz status ...
	I0528 22:10:36.022584   78166 api_server.go:253] Checking apiserver healthz at https://192.168.39.57:8443/healthz ...
	I0528 22:10:36.028270   78166 api_server.go:279] https://192.168.39.57:8443/healthz returned 200:
	ok
	I0528 22:10:36.029416   78166 api_server.go:141] control plane version: v1.30.1
	I0528 22:10:36.029437   78166 api_server.go:131] duration metric: took 6.863133ms to wait for apiserver health ...
	I0528 22:10:36.029444   78166 system_pods.go:43] waiting for kube-system pods to appear ...
	I0528 22:10:36.034807   78166 system_pods.go:59] 8 kube-system pods found
	I0528 22:10:36.034833   78166 system_pods.go:61] "coredns-7db6d8ff4d-wk5f4" [9dcd7b17-fc19-4468-b8f9-76a2fb7f1ec9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0528 22:10:36.034842   78166 system_pods.go:61] "etcd-newest-cni-588598" [785dbf00-a5a6-4946-8a36-6200a875dbcc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0528 22:10:36.034851   78166 system_pods.go:61] "kube-apiserver-newest-cni-588598" [c9b79154-b6b7-494e-92b1-c447580db787] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0528 22:10:36.034857   78166 system_pods.go:61] "kube-controller-manager-newest-cni-588598" [f14bfaa9-0a88-4c01-9065-765797138f5d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0528 22:10:36.034862   78166 system_pods.go:61] "kube-proxy-8jgfw" [8125c94f-11df-4eee-8612-9546dc054146] Running
	I0528 22:10:36.034867   78166 system_pods.go:61] "kube-scheduler-newest-cni-588598" [3e3160b5-e111-4a5e-9082-c9ae2a6633c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0528 22:10:36.034873   78166 system_pods.go:61] "metrics-server-569cc877fc-zhskl" [af95aae0-a143-4c72-a193-3a097270666a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0528 22:10:36.034877   78166 system_pods.go:61] "storage-provisioner" [9993a26e-0e7d-45d6-ac6f-3672e3390ba5] Running
	I0528 22:10:36.034882   78166 system_pods.go:74] duration metric: took 5.433272ms to wait for pod list to return data ...
	I0528 22:10:36.034891   78166 default_sa.go:34] waiting for default service account to be created ...
	I0528 22:10:36.037186   78166 default_sa.go:45] found service account: "default"
	I0528 22:10:36.037208   78166 default_sa.go:55] duration metric: took 2.311977ms for default service account to be created ...
	I0528 22:10:36.037217   78166 kubeadm.go:576] duration metric: took 257.62574ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0528 22:10:36.037231   78166 node_conditions.go:102] verifying NodePressure condition ...
	I0528 22:10:36.039286   78166 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0528 22:10:36.039302   78166 node_conditions.go:123] node cpu capacity is 2
	I0528 22:10:36.039309   78166 node_conditions.go:105] duration metric: took 2.074024ms to run NodePressure ...
	I0528 22:10:36.039319   78166 start.go:240] waiting for startup goroutines ...
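The wait loop logged above (apiserver /healthz, kube-system pods, default service account, NodePressure) is the post-start verification phase before addon installation begins. As a rough illustration of the first of those checks, a /healthz poll can be sketched in Go as follows; the URL, timeout, and TLS handling here are assumptions for the example, not minikube's actual implementation:

	// Minimal sketch: poll an apiserver /healthz endpoint until it returns 200 "ok",
	// as recorded in the log above. Not minikube's code; parameters are illustrative.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		// The apiserver in this setup serves a self-signed certificate, so the
		// sketch skips verification; a real client would load the cluster CA instead.
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil // healthz returned 200: ok
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.57:8443/healthz", 30*time.Second); err != nil {
			fmt.Println(err)
		}
	}
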
	I0528 22:10:36.064588   78166 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0528 22:10:36.064618   78166 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0528 22:10:36.071458   78166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0528 22:10:36.091840   78166 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0528 22:10:36.091875   78166 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0528 22:10:36.122954   78166 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0528 22:10:36.122986   78166 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0528 22:10:36.151455   78166 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0528 22:10:36.151475   78166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0528 22:10:36.167610   78166 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0528 22:10:36.167628   78166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0528 22:10:36.183705   78166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0528 22:10:36.198394   78166 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0528 22:10:36.198426   78166 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0528 22:10:36.212483   78166 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0528 22:10:36.212502   78166 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0528 22:10:36.249558   78166 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0528 22:10:36.249593   78166 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0528 22:10:36.251340   78166 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0528 22:10:36.251359   78166 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0528 22:10:36.302541   78166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0528 22:10:36.316104   78166 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0528 22:10:36.316128   78166 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0528 22:10:36.343632   78166 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0528 22:10:36.343657   78166 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0528 22:10:36.368360   78166 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0528 22:10:36.368383   78166 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0528 22:10:36.430667   78166 main.go:141] libmachine: Making call to close driver server
	I0528 22:10:36.430705   78166 main.go:141] libmachine: (newest-cni-588598) Calling .Close
	I0528 22:10:36.431035   78166 main.go:141] libmachine: (newest-cni-588598) DBG | Closing plugin on server side
	I0528 22:10:36.431095   78166 main.go:141] libmachine: Successfully made call to close driver server
	I0528 22:10:36.431113   78166 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 22:10:36.431126   78166 main.go:141] libmachine: Making call to close driver server
	I0528 22:10:36.431140   78166 main.go:141] libmachine: (newest-cni-588598) Calling .Close
	I0528 22:10:36.431470   78166 main.go:141] libmachine: (newest-cni-588598) DBG | Closing plugin on server side
	I0528 22:10:36.431495   78166 main.go:141] libmachine: Successfully made call to close driver server
	I0528 22:10:36.431507   78166 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 22:10:36.437934   78166 main.go:141] libmachine: Making call to close driver server
	I0528 22:10:36.437955   78166 main.go:141] libmachine: (newest-cni-588598) Calling .Close
	I0528 22:10:36.438185   78166 main.go:141] libmachine: Successfully made call to close driver server
	I0528 22:10:36.438221   78166 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 22:10:36.492982   78166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0528 22:10:37.677004   78166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.493257075s)
	I0528 22:10:37.677057   78166 main.go:141] libmachine: Making call to close driver server
	I0528 22:10:37.677069   78166 main.go:141] libmachine: (newest-cni-588598) Calling .Close
	I0528 22:10:37.677356   78166 main.go:141] libmachine: Successfully made call to close driver server
	I0528 22:10:37.677376   78166 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 22:10:37.677392   78166 main.go:141] libmachine: Making call to close driver server
	I0528 22:10:37.677482   78166 main.go:141] libmachine: (newest-cni-588598) Calling .Close
	I0528 22:10:37.677700   78166 main.go:141] libmachine: Successfully made call to close driver server
	I0528 22:10:37.677723   78166 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 22:10:37.782133   78166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.479541726s)
	I0528 22:10:37.782201   78166 main.go:141] libmachine: Making call to close driver server
	I0528 22:10:37.782217   78166 main.go:141] libmachine: (newest-cni-588598) Calling .Close
	I0528 22:10:37.782560   78166 main.go:141] libmachine: (newest-cni-588598) DBG | Closing plugin on server side
	I0528 22:10:37.782567   78166 main.go:141] libmachine: Successfully made call to close driver server
	I0528 22:10:37.782580   78166 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 22:10:37.782590   78166 main.go:141] libmachine: Making call to close driver server
	I0528 22:10:37.782598   78166 main.go:141] libmachine: (newest-cni-588598) Calling .Close
	I0528 22:10:37.782881   78166 main.go:141] libmachine: (newest-cni-588598) DBG | Closing plugin on server side
	I0528 22:10:37.782895   78166 main.go:141] libmachine: Successfully made call to close driver server
	I0528 22:10:37.782906   78166 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 22:10:37.782919   78166 addons.go:475] Verifying addon metrics-server=true in "newest-cni-588598"
	I0528 22:10:37.847114   78166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.354088519s)
	I0528 22:10:37.847172   78166 main.go:141] libmachine: Making call to close driver server
	I0528 22:10:37.847186   78166 main.go:141] libmachine: (newest-cni-588598) Calling .Close
	I0528 22:10:37.847485   78166 main.go:141] libmachine: Successfully made call to close driver server
	I0528 22:10:37.847503   78166 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 22:10:37.847513   78166 main.go:141] libmachine: Making call to close driver server
	I0528 22:10:37.847521   78166 main.go:141] libmachine: (newest-cni-588598) Calling .Close
	I0528 22:10:37.847758   78166 main.go:141] libmachine: Successfully made call to close driver server
	I0528 22:10:37.847774   78166 main.go:141] libmachine: Making call to close connection to plugin binary
	I0528 22:10:37.849515   78166 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-588598 addons enable metrics-server
	
	I0528 22:10:37.850980   78166 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0528 22:10:37.852515   78166 addons.go:510] duration metric: took 2.072864323s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0528 22:10:37.852553   78166 start.go:245] waiting for cluster config update ...
	I0528 22:10:37.852568   78166 start.go:254] writing updated cluster config ...
	I0528 22:10:37.852808   78166 ssh_runner.go:195] Run: rm -f paused
	I0528 22:10:37.900425   78166 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0528 22:10:37.902424   78166 out.go:177] * Done! kubectl is now configured to use "newest-cni-588598" cluster and "default" namespace by default
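The final status line reports "kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)". The skew is simply the difference between the minor components of the two version strings; the following Go sketch shows one way to compute it (the function name and parsing are assumptions for illustration, not the code minikube uses):

	// Minimal sketch: compute the minor-version skew between a kubectl version
	// and a cluster version string such as "1.30.1".
	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// minorSkew returns the absolute difference between the minor components
	// of two "major.minor.patch" version strings.
	func minorSkew(kubectlVer, clusterVer string) (int, error) {
		minor := func(v string) (int, error) {
			parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
			if len(parts) < 2 {
				return 0, fmt.Errorf("unexpected version %q", v)
			}
			return strconv.Atoi(parts[1])
		}
		a, err := minor(kubectlVer)
		if err != nil {
			return 0, err
		}
		b, err := minor(clusterVer)
		if err != nil {
			return 0, err
		}
		if a > b {
			return a - b, nil
		}
		return b - a, nil
	}

	func main() {
		skew, _ := minorSkew("1.30.1", "1.30.1")
		fmt.Println("minor skew:", skew) // prints 0, matching the log line above
	}
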
	
	
	==> CRI-O <==
	May 28 22:17:00 default-k8s-diff-port-249165 crio[730]: time="2024-05-28 22:17:00.000330045Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716934620000298916,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4f055c2f-119f-4207-a1e9-1687077df6b8 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:17:00 default-k8s-diff-port-249165 crio[730]: time="2024-05-28 22:17:00.000942898Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fb234db2-9cff-4ce9-9a9e-e33cf33e59a6 name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:17:00 default-k8s-diff-port-249165 crio[730]: time="2024-05-28 22:17:00.001035752Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fb234db2-9cff-4ce9-9a9e-e33cf33e59a6 name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:17:00 default-k8s-diff-port-249165 crio[730]: time="2024-05-28 22:17:00.001274036Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1c548f7544cbb6646beaf3e60e066fd007e36751aa6d64dc3525d2e7c313115c,PodSandboxId:2648aa7b5be82109ec33dc22d721afb5182f4314fd51e2de905ec4553b75fbdb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716933839153737016,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9v4qf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 970de16b-4ade-4d82-8f78-fc83fc86fc8a,},Annotations:map[string]string{io.kubernetes.container.hash: 3e26d238,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0faefa4b1c4c329709900d137e560de3390c8153d2f934dcafed855199f336f0,PodSandboxId:8ebf0bd9db29cba925c9024a33413319840d4fa4c917e999210ed3cced56e604,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716933839100165633,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-m7n7k,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: caf303ad-139a-4b42-820e-617fa654399c,},Annotations:map[string]string{io.kubernetes.container.hash: ea30a637,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fec711aaa85924e7057421993cf75a2bf3b977afbbed007f940052128eb02e89,PodSandboxId:93dc7b05268240c895dc9b9c7de85b9349208e60b66b8292d6cf49c06966da6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1716933838493672775,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be3fd3ac-6795-4168-bd94-007932dcbb2c,},Annotations:map[string]string{io.kubernetes.container.hash: 14e90e58,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8bab6489dd8e95e261046821e03129e1df20511a957172d131a801383b2782c,PodSandboxId:994ab08e76d0a16f9f656192c7305743082ae5274d38c64ceee31d1490c0ae70,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING
,CreatedAt:1716933837933311404,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b2nd9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df64df09-8898-44db-919c-0b1d564538ee,},Annotations:map[string]string{io.kubernetes.container.hash: fb208a3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4719768083408e8d40a1b3e1767d0c299d50fc88c3d66aac9b6fd5d458ee7ac,PodSandboxId:de74adb6bb2e42045eddd14aab0a6da13119970fcec0e361690ae712e702f5f2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716933818509286762,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-249165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39715514d16e1aef2c805f45c43e942c,},Annotations:map[string]string{io.kubernetes.container.hash: 55d06a1a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30fed8617be7400eab4846e41b293ade0c1ada903f8c3e21fdcd5052ca35b41d,PodSandboxId:5819f408569516337af99087fe96a2a11a1dec54cb0fccef7a2ecc34c8394c34,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716933818541679246,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-249165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb34c519fc34f94122ba139e98e7226a,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7456087993ce43df9e33b9136199c345eb75a44dd4be3c9eae3d8378076c3cfb,PodSandboxId:eaf3652a3a78ac206674ff795df24a67155bcb3220adf5b257f77b1588fd29dc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716933818451356924,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-249165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15b574d3676ce396f415ec6bdfd52e3c,},Annotations:map[string]string{io.kubernetes.container.hash: c6aa01a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa0d2ea508b9e928e0a175ee08f28acb732359d0cb411c0baaa7c36b7a9913bb,PodSandboxId:7c43d8161def62f299845abf9bc11d8c831b1ecb31982eb8a4dd37d9caeec00a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716933818414528223,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-249165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832b6925a1cfa430048d5fd4482f4cbc,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fb234db2-9cff-4ce9-9a9e-e33cf33e59a6 name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:17:00 default-k8s-diff-port-249165 crio[730]: time="2024-05-28 22:17:00.049406257Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=75167523-d019-4a5c-a2fd-66b07eca0058 name=/runtime.v1.RuntimeService/Version
	May 28 22:17:00 default-k8s-diff-port-249165 crio[730]: time="2024-05-28 22:17:00.049768889Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=75167523-d019-4a5c-a2fd-66b07eca0058 name=/runtime.v1.RuntimeService/Version
	May 28 22:17:00 default-k8s-diff-port-249165 crio[730]: time="2024-05-28 22:17:00.051090292Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=355f3407-87e1-488d-b78c-4ed85cb85b3b name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:17:00 default-k8s-diff-port-249165 crio[730]: time="2024-05-28 22:17:00.051995848Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716934620051965468,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=355f3407-87e1-488d-b78c-4ed85cb85b3b name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:17:00 default-k8s-diff-port-249165 crio[730]: time="2024-05-28 22:17:00.052808348Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4f23ea96-e4f4-460b-aaa0-bd82f72b36f1 name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:17:00 default-k8s-diff-port-249165 crio[730]: time="2024-05-28 22:17:00.052904724Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4f23ea96-e4f4-460b-aaa0-bd82f72b36f1 name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:17:00 default-k8s-diff-port-249165 crio[730]: time="2024-05-28 22:17:00.053380368Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1c548f7544cbb6646beaf3e60e066fd007e36751aa6d64dc3525d2e7c313115c,PodSandboxId:2648aa7b5be82109ec33dc22d721afb5182f4314fd51e2de905ec4553b75fbdb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716933839153737016,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9v4qf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 970de16b-4ade-4d82-8f78-fc83fc86fc8a,},Annotations:map[string]string{io.kubernetes.container.hash: 3e26d238,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0faefa4b1c4c329709900d137e560de3390c8153d2f934dcafed855199f336f0,PodSandboxId:8ebf0bd9db29cba925c9024a33413319840d4fa4c917e999210ed3cced56e604,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716933839100165633,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-m7n7k,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: caf303ad-139a-4b42-820e-617fa654399c,},Annotations:map[string]string{io.kubernetes.container.hash: ea30a637,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fec711aaa85924e7057421993cf75a2bf3b977afbbed007f940052128eb02e89,PodSandboxId:93dc7b05268240c895dc9b9c7de85b9349208e60b66b8292d6cf49c06966da6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1716933838493672775,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be3fd3ac-6795-4168-bd94-007932dcbb2c,},Annotations:map[string]string{io.kubernetes.container.hash: 14e90e58,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8bab6489dd8e95e261046821e03129e1df20511a957172d131a801383b2782c,PodSandboxId:994ab08e76d0a16f9f656192c7305743082ae5274d38c64ceee31d1490c0ae70,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING
,CreatedAt:1716933837933311404,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b2nd9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df64df09-8898-44db-919c-0b1d564538ee,},Annotations:map[string]string{io.kubernetes.container.hash: fb208a3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4719768083408e8d40a1b3e1767d0c299d50fc88c3d66aac9b6fd5d458ee7ac,PodSandboxId:de74adb6bb2e42045eddd14aab0a6da13119970fcec0e361690ae712e702f5f2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716933818509286762,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-249165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39715514d16e1aef2c805f45c43e942c,},Annotations:map[string]string{io.kubernetes.container.hash: 55d06a1a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30fed8617be7400eab4846e41b293ade0c1ada903f8c3e21fdcd5052ca35b41d,PodSandboxId:5819f408569516337af99087fe96a2a11a1dec54cb0fccef7a2ecc34c8394c34,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716933818541679246,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-249165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb34c519fc34f94122ba139e98e7226a,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7456087993ce43df9e33b9136199c345eb75a44dd4be3c9eae3d8378076c3cfb,PodSandboxId:eaf3652a3a78ac206674ff795df24a67155bcb3220adf5b257f77b1588fd29dc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716933818451356924,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-249165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15b574d3676ce396f415ec6bdfd52e3c,},Annotations:map[string]string{io.kubernetes.container.hash: c6aa01a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa0d2ea508b9e928e0a175ee08f28acb732359d0cb411c0baaa7c36b7a9913bb,PodSandboxId:7c43d8161def62f299845abf9bc11d8c831b1ecb31982eb8a4dd37d9caeec00a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716933818414528223,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-249165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832b6925a1cfa430048d5fd4482f4cbc,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4f23ea96-e4f4-460b-aaa0-bd82f72b36f1 name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:17:00 default-k8s-diff-port-249165 crio[730]: time="2024-05-28 22:17:00.102317787Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2054dcce-aa52-4f56-92b3-bebdb0d95a24 name=/runtime.v1.RuntimeService/Version
	May 28 22:17:00 default-k8s-diff-port-249165 crio[730]: time="2024-05-28 22:17:00.102577646Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2054dcce-aa52-4f56-92b3-bebdb0d95a24 name=/runtime.v1.RuntimeService/Version
	May 28 22:17:00 default-k8s-diff-port-249165 crio[730]: time="2024-05-28 22:17:00.103267033Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6b8b6e5a-31fb-423d-bdea-dd5c926359d5 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:17:00 default-k8s-diff-port-249165 crio[730]: time="2024-05-28 22:17:00.103842434Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716934620103819310,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6b8b6e5a-31fb-423d-bdea-dd5c926359d5 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:17:00 default-k8s-diff-port-249165 crio[730]: time="2024-05-28 22:17:00.104241087Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8e9cc807-3e65-4888-8084-f9b325a002d2 name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:17:00 default-k8s-diff-port-249165 crio[730]: time="2024-05-28 22:17:00.104287911Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8e9cc807-3e65-4888-8084-f9b325a002d2 name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:17:00 default-k8s-diff-port-249165 crio[730]: time="2024-05-28 22:17:00.104530209Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1c548f7544cbb6646beaf3e60e066fd007e36751aa6d64dc3525d2e7c313115c,PodSandboxId:2648aa7b5be82109ec33dc22d721afb5182f4314fd51e2de905ec4553b75fbdb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716933839153737016,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9v4qf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 970de16b-4ade-4d82-8f78-fc83fc86fc8a,},Annotations:map[string]string{io.kubernetes.container.hash: 3e26d238,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0faefa4b1c4c329709900d137e560de3390c8153d2f934dcafed855199f336f0,PodSandboxId:8ebf0bd9db29cba925c9024a33413319840d4fa4c917e999210ed3cced56e604,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716933839100165633,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-m7n7k,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: caf303ad-139a-4b42-820e-617fa654399c,},Annotations:map[string]string{io.kubernetes.container.hash: ea30a637,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fec711aaa85924e7057421993cf75a2bf3b977afbbed007f940052128eb02e89,PodSandboxId:93dc7b05268240c895dc9b9c7de85b9349208e60b66b8292d6cf49c06966da6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1716933838493672775,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be3fd3ac-6795-4168-bd94-007932dcbb2c,},Annotations:map[string]string{io.kubernetes.container.hash: 14e90e58,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8bab6489dd8e95e261046821e03129e1df20511a957172d131a801383b2782c,PodSandboxId:994ab08e76d0a16f9f656192c7305743082ae5274d38c64ceee31d1490c0ae70,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING
,CreatedAt:1716933837933311404,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b2nd9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df64df09-8898-44db-919c-0b1d564538ee,},Annotations:map[string]string{io.kubernetes.container.hash: fb208a3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4719768083408e8d40a1b3e1767d0c299d50fc88c3d66aac9b6fd5d458ee7ac,PodSandboxId:de74adb6bb2e42045eddd14aab0a6da13119970fcec0e361690ae712e702f5f2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716933818509286762,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-249165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39715514d16e1aef2c805f45c43e942c,},Annotations:map[string]string{io.kubernetes.container.hash: 55d06a1a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30fed8617be7400eab4846e41b293ade0c1ada903f8c3e21fdcd5052ca35b41d,PodSandboxId:5819f408569516337af99087fe96a2a11a1dec54cb0fccef7a2ecc34c8394c34,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716933818541679246,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-249165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb34c519fc34f94122ba139e98e7226a,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7456087993ce43df9e33b9136199c345eb75a44dd4be3c9eae3d8378076c3cfb,PodSandboxId:eaf3652a3a78ac206674ff795df24a67155bcb3220adf5b257f77b1588fd29dc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716933818451356924,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-249165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15b574d3676ce396f415ec6bdfd52e3c,},Annotations:map[string]string{io.kubernetes.container.hash: c6aa01a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa0d2ea508b9e928e0a175ee08f28acb732359d0cb411c0baaa7c36b7a9913bb,PodSandboxId:7c43d8161def62f299845abf9bc11d8c831b1ecb31982eb8a4dd37d9caeec00a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716933818414528223,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-249165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832b6925a1cfa430048d5fd4482f4cbc,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8e9cc807-3e65-4888-8084-f9b325a002d2 name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:17:00 default-k8s-diff-port-249165 crio[730]: time="2024-05-28 22:17:00.138554004Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e360783e-e3ac-4b0d-951f-d2c170e3b3f5 name=/runtime.v1.RuntimeService/Version
	May 28 22:17:00 default-k8s-diff-port-249165 crio[730]: time="2024-05-28 22:17:00.138643175Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e360783e-e3ac-4b0d-951f-d2c170e3b3f5 name=/runtime.v1.RuntimeService/Version
	May 28 22:17:00 default-k8s-diff-port-249165 crio[730]: time="2024-05-28 22:17:00.139713182Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5b695040-d41c-40d2-b611-b5403bf5b672 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:17:00 default-k8s-diff-port-249165 crio[730]: time="2024-05-28 22:17:00.140091947Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1716934620140068821,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5b695040-d41c-40d2-b611-b5403bf5b672 name=/runtime.v1.ImageService/ImageFsInfo
	May 28 22:17:00 default-k8s-diff-port-249165 crio[730]: time="2024-05-28 22:17:00.141036500Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cd777e76-364e-4f05-9450-6cc9e3ae9abd name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:17:00 default-k8s-diff-port-249165 crio[730]: time="2024-05-28 22:17:00.141105943Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cd777e76-364e-4f05-9450-6cc9e3ae9abd name=/runtime.v1.RuntimeService/ListContainers
	May 28 22:17:00 default-k8s-diff-port-249165 crio[730]: time="2024-05-28 22:17:00.141354288Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1c548f7544cbb6646beaf3e60e066fd007e36751aa6d64dc3525d2e7c313115c,PodSandboxId:2648aa7b5be82109ec33dc22d721afb5182f4314fd51e2de905ec4553b75fbdb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716933839153737016,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9v4qf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 970de16b-4ade-4d82-8f78-fc83fc86fc8a,},Annotations:map[string]string{io.kubernetes.container.hash: 3e26d238,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0faefa4b1c4c329709900d137e560de3390c8153d2f934dcafed855199f336f0,PodSandboxId:8ebf0bd9db29cba925c9024a33413319840d4fa4c917e999210ed3cced56e604,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1716933839100165633,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-m7n7k,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: caf303ad-139a-4b42-820e-617fa654399c,},Annotations:map[string]string{io.kubernetes.container.hash: ea30a637,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fec711aaa85924e7057421993cf75a2bf3b977afbbed007f940052128eb02e89,PodSandboxId:93dc7b05268240c895dc9b9c7de85b9349208e60b66b8292d6cf49c06966da6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1716933838493672775,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be3fd3ac-6795-4168-bd94-007932dcbb2c,},Annotations:map[string]string{io.kubernetes.container.hash: 14e90e58,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8bab6489dd8e95e261046821e03129e1df20511a957172d131a801383b2782c,PodSandboxId:994ab08e76d0a16f9f656192c7305743082ae5274d38c64ceee31d1490c0ae70,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING
,CreatedAt:1716933837933311404,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b2nd9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df64df09-8898-44db-919c-0b1d564538ee,},Annotations:map[string]string{io.kubernetes.container.hash: fb208a3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4719768083408e8d40a1b3e1767d0c299d50fc88c3d66aac9b6fd5d458ee7ac,PodSandboxId:de74adb6bb2e42045eddd14aab0a6da13119970fcec0e361690ae712e702f5f2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1716933818509286762,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-249165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39715514d16e1aef2c805f45c43e942c,},Annotations:map[string]string{io.kubernetes.container.hash: 55d06a1a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30fed8617be7400eab4846e41b293ade0c1ada903f8c3e21fdcd5052ca35b41d,PodSandboxId:5819f408569516337af99087fe96a2a11a1dec54cb0fccef7a2ecc34c8394c34,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1716933818541679246,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-249165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb34c519fc34f94122ba139e98e7226a,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7456087993ce43df9e33b9136199c345eb75a44dd4be3c9eae3d8378076c3cfb,PodSandboxId:eaf3652a3a78ac206674ff795df24a67155bcb3220adf5b257f77b1588fd29dc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1716933818451356924,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-249165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15b574d3676ce396f415ec6bdfd52e3c,},Annotations:map[string]string{io.kubernetes.container.hash: c6aa01a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa0d2ea508b9e928e0a175ee08f28acb732359d0cb411c0baaa7c36b7a9913bb,PodSandboxId:7c43d8161def62f299845abf9bc11d8c831b1ecb31982eb8a4dd37d9caeec00a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1716933818414528223,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-249165,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832b6925a1cfa430048d5fd4482f4cbc,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cd777e76-364e-4f05-9450-6cc9e3ae9abd name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1c548f7544cbb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   13 minutes ago      Running             coredns                   0                   2648aa7b5be82       coredns-7db6d8ff4d-9v4qf
	0faefa4b1c4c3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   13 minutes ago      Running             coredns                   0                   8ebf0bd9db29c       coredns-7db6d8ff4d-m7n7k
	fec711aaa8592       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   93dc7b0526824       storage-provisioner
	c8bab6489dd8e       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   13 minutes ago      Running             kube-proxy                0                   994ab08e76d0a       kube-proxy-b2nd9
	30fed8617be74       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   13 minutes ago      Running             kube-scheduler            2                   5819f40856951       kube-scheduler-default-k8s-diff-port-249165
	b471976808340       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   13 minutes ago      Running             etcd                      2                   de74adb6bb2e4       etcd-default-k8s-diff-port-249165
	7456087993ce4       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   13 minutes ago      Running             kube-apiserver            2                   eaf3652a3a78a       kube-apiserver-default-k8s-diff-port-249165
	aa0d2ea508b9e       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   13 minutes ago      Running             kube-controller-manager   2                   7c43d8161def6       kube-controller-manager-default-k8s-diff-port-249165
	
	
	==> coredns [0faefa4b1c4c329709900d137e560de3390c8153d2f934dcafed855199f336f0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [1c548f7544cbb6646beaf3e60e066fd007e36751aa6d64dc3525d2e7c313115c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-249165
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-249165
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82
	                    minikube.k8s.io/name=default-k8s-diff-port-249165
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_28T22_03_44_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 22:03:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-249165
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 22:16:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 May 2024 22:14:15 +0000   Tue, 28 May 2024 22:03:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 May 2024 22:14:15 +0000   Tue, 28 May 2024 22:03:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 May 2024 22:14:15 +0000   Tue, 28 May 2024 22:03:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 May 2024 22:14:15 +0000   Tue, 28 May 2024 22:03:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.48
	  Hostname:    default-k8s-diff-port-249165
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1a1a15b852f4408f840f7cdc28c2cdd1
	  System UUID:                1a1a15b8-52f4-408f-840f-7cdc28c2cdd1
	  Boot ID:                    1525e0b5-a615-412d-8626-275908ae12e3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-9v4qf                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7db6d8ff4d-m7n7k                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-default-k8s-diff-port-249165                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kube-apiserver-default-k8s-diff-port-249165             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-249165    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-b2nd9                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-default-k8s-diff-port-249165             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-569cc877fc-6q6pz                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13m   kube-proxy       
	  Normal  Starting                 13m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m   kubelet          Node default-k8s-diff-port-249165 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet          Node default-k8s-diff-port-249165 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet          Node default-k8s-diff-port-249165 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m   node-controller  Node default-k8s-diff-port-249165 event: Registered Node default-k8s-diff-port-249165 in Controller
	
	
	==> dmesg <==
	[  +0.039684] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.631690] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.453709] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.624429] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.222216] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.060876] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.052989] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +0.176096] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +0.131262] systemd-fstab-generator[684]: Ignoring "noauto" option for root device
	[  +0.281323] systemd-fstab-generator[714]: Ignoring "noauto" option for root device
	[  +4.309227] systemd-fstab-generator[812]: Ignoring "noauto" option for root device
	[  +0.062021] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.058392] systemd-fstab-generator[934]: Ignoring "noauto" option for root device
	[  +4.640518] kauditd_printk_skb: 97 callbacks suppressed
	[May28 21:59] kauditd_printk_skb: 50 callbacks suppressed
	[  +7.349144] kauditd_printk_skb: 27 callbacks suppressed
	[May28 22:03] kauditd_printk_skb: 9 callbacks suppressed
	[  +1.329746] systemd-fstab-generator[3598]: Ignoring "noauto" option for root device
	[  +4.561055] kauditd_printk_skb: 53 callbacks suppressed
	[  +1.504187] systemd-fstab-generator[3921]: Ignoring "noauto" option for root device
	[ +13.403247] systemd-fstab-generator[4130]: Ignoring "noauto" option for root device
	[  +0.116519] kauditd_printk_skb: 14 callbacks suppressed
	[May28 22:05] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [b4719768083408e8d40a1b3e1767d0c299d50fc88c3d66aac9b6fd5d458ee7ac] <==
	{"level":"info","ts":"2024-05-28T22:03:39.564655Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"36b30da979eae81e became leader at term 2"}
	{"level":"info","ts":"2024-05-28T22:03:39.564663Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 36b30da979eae81e elected leader 36b30da979eae81e at term 2"}
	{"level":"info","ts":"2024-05-28T22:03:39.567601Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-28T22:03:39.569748Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"36b30da979eae81e","local-member-attributes":"{Name:default-k8s-diff-port-249165 ClientURLs:[https://192.168.72.48:2379]}","request-path":"/0/members/36b30da979eae81e/attributes","cluster-id":"a85db1df86d6d05","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-28T22:03:39.571176Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a85db1df86d6d05","local-member-id":"36b30da979eae81e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-28T22:03:39.571304Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-28T22:03:39.571407Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-28T22:03:39.571436Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-28T22:03:39.571332Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-28T22:03:39.571526Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-28T22:03:39.571351Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-28T22:03:39.573293Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.48:2379"}
	{"level":"info","ts":"2024-05-28T22:03:39.57827Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-05-28T22:09:30.742527Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"203.410561ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-28T22:09:30.742708Z","caller":"traceutil/trace.go:171","msg":"trace[461816137] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:718; }","duration":"203.724242ms","start":"2024-05-28T22:09:30.538949Z","end":"2024-05-28T22:09:30.742673Z","steps":["trace[461816137] 'range keys from in-memory index tree'  (duration: 203.28865ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T22:10:29.591911Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"619.512308ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-28T22:10:29.592061Z","caller":"traceutil/trace.go:171","msg":"trace[1470547791] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:766; }","duration":"619.734222ms","start":"2024-05-28T22:10:28.972296Z","end":"2024-05-28T22:10:29.59203Z","steps":["trace[1470547791] 'range keys from in-memory index tree'  (duration: 619.397905ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T22:10:29.59211Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-28T22:10:28.972283Z","time spent":"619.813582ms","remote":"127.0.0.1:49126","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-05-28T22:10:29.592697Z","caller":"traceutil/trace.go:171","msg":"trace[214288093] transaction","detail":"{read_only:false; response_revision:767; number_of_response:1; }","duration":"143.963892ms","start":"2024-05-28T22:10:29.448712Z","end":"2024-05-28T22:10:29.592676Z","steps":["trace[214288093] 'process raft request'  (duration: 131.336203ms)","trace[214288093] 'compare'  (duration: 11.690437ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-28T22:10:30.152869Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"314.178366ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-28T22:10:30.153728Z","caller":"traceutil/trace.go:171","msg":"trace[473617281] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:767; }","duration":"315.021154ms","start":"2024-05-28T22:10:29.838639Z","end":"2024-05-28T22:10:30.15366Z","steps":["trace[473617281] 'range keys from in-memory index tree'  (duration: 314.079243ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-28T22:10:30.153824Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-28T22:10:29.838618Z","time spent":"315.184881ms","remote":"127.0.0.1:49112","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-05-28T22:13:39.635636Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":675}
	{"level":"info","ts":"2024-05-28T22:13:39.644571Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":675,"took":"8.565462ms","hash":1502472490,"current-db-size-bytes":2269184,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2269184,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-05-28T22:13:39.644626Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1502472490,"revision":675,"compact-revision":-1}
	
	
	==> kernel <==
	 22:17:00 up 18 min,  0 users,  load average: 0.22, 0.15, 0.11
	Linux default-k8s-diff-port-249165 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [7456087993ce43df9e33b9136199c345eb75a44dd4be3c9eae3d8378076c3cfb] <==
	I0528 22:11:42.022717       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0528 22:13:41.024694       1 handler_proxy.go:93] no RequestInfo found in the context
	E0528 22:13:41.024962       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0528 22:13:42.025975       1 handler_proxy.go:93] no RequestInfo found in the context
	E0528 22:13:42.026086       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0528 22:13:42.026135       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0528 22:13:42.026038       1 handler_proxy.go:93] no RequestInfo found in the context
	E0528 22:13:42.026285       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0528 22:13:42.027949       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0528 22:14:42.026801       1 handler_proxy.go:93] no RequestInfo found in the context
	E0528 22:14:42.026878       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0528 22:14:42.026887       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0528 22:14:42.028107       1 handler_proxy.go:93] no RequestInfo found in the context
	E0528 22:14:42.028183       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0528 22:14:42.028190       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0528 22:16:42.027277       1 handler_proxy.go:93] no RequestInfo found in the context
	E0528 22:16:42.027382       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0528 22:16:42.027390       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0528 22:16:42.028621       1 handler_proxy.go:93] no RequestInfo found in the context
	E0528 22:16:42.028708       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0528 22:16:42.028716       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [aa0d2ea508b9e928e0a175ee08f28acb732359d0cb411c0baaa7c36b7a9913bb] <==
	I0528 22:11:27.060983       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 22:11:56.577980       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:11:57.068316       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 22:12:26.582700       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:12:27.076780       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 22:12:56.588018       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:12:57.084701       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 22:13:26.592660       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:13:27.092592       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 22:13:56.598228       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:13:57.101190       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 22:14:26.603333       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:14:27.108897       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0528 22:14:52.674380       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="105.117µs"
	E0528 22:14:56.607973       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:14:57.117384       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0528 22:15:04.671929       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="96.688µs"
	E0528 22:15:26.613105       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:15:27.125220       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 22:15:56.622060       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:15:57.133701       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 22:16:26.626383       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:16:27.142177       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0528 22:16:56.630718       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0528 22:16:57.150904       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [c8bab6489dd8e95e261046821e03129e1df20511a957172d131a801383b2782c] <==
	I0528 22:03:58.316691       1 server_linux.go:69] "Using iptables proxy"
	I0528 22:03:58.341274       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.48"]
	I0528 22:03:58.416759       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0528 22:03:58.416812       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0528 22:03:58.416853       1 server_linux.go:165] "Using iptables Proxier"
	I0528 22:03:58.422704       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0528 22:03:58.422941       1 server.go:872] "Version info" version="v1.30.1"
	I0528 22:03:58.422956       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 22:03:58.424384       1 config.go:192] "Starting service config controller"
	I0528 22:03:58.424399       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0528 22:03:58.424423       1 config.go:101] "Starting endpoint slice config controller"
	I0528 22:03:58.424426       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0528 22:03:58.425063       1 config.go:319] "Starting node config controller"
	I0528 22:03:58.425070       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0528 22:03:58.524785       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0528 22:03:58.524902       1 shared_informer.go:320] Caches are synced for service config
	I0528 22:03:58.526553       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [30fed8617be7400eab4846e41b293ade0c1ada903f8c3e21fdcd5052ca35b41d] <==
	W0528 22:03:41.061226       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0528 22:03:41.061257       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0528 22:03:41.061331       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0528 22:03:41.061362       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0528 22:03:41.061442       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0528 22:03:41.062544       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0528 22:03:41.061682       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0528 22:03:41.062595       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0528 22:03:41.061757       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0528 22:03:41.062609       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0528 22:03:41.963178       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0528 22:03:41.963562       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0528 22:03:41.975413       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0528 22:03:41.975548       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0528 22:03:42.019400       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0528 22:03:42.019445       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0528 22:03:42.023228       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0528 22:03:42.023561       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0528 22:03:42.270972       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0528 22:03:42.271017       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0528 22:03:42.275553       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0528 22:03:42.275743       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0528 22:03:42.282173       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0528 22:03:42.282334       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0528 22:03:44.630655       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 28 22:14:43 default-k8s-diff-port-249165 kubelet[3928]: E0528 22:14:43.695026    3928 iptables.go:577] "Could not set up iptables canary" err=<
	May 28 22:14:43 default-k8s-diff-port-249165 kubelet[3928]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 22:14:43 default-k8s-diff-port-249165 kubelet[3928]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 22:14:43 default-k8s-diff-port-249165 kubelet[3928]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 22:14:43 default-k8s-diff-port-249165 kubelet[3928]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 28 22:14:52 default-k8s-diff-port-249165 kubelet[3928]: E0528 22:14:52.658186    3928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6q6pz" podUID="443b12f9-e99d-4bb7-ae3f-8a25ed277f44"
	May 28 22:15:04 default-k8s-diff-port-249165 kubelet[3928]: E0528 22:15:04.658328    3928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6q6pz" podUID="443b12f9-e99d-4bb7-ae3f-8a25ed277f44"
	May 28 22:15:16 default-k8s-diff-port-249165 kubelet[3928]: E0528 22:15:16.658403    3928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6q6pz" podUID="443b12f9-e99d-4bb7-ae3f-8a25ed277f44"
	May 28 22:15:29 default-k8s-diff-port-249165 kubelet[3928]: E0528 22:15:29.658137    3928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6q6pz" podUID="443b12f9-e99d-4bb7-ae3f-8a25ed277f44"
	May 28 22:15:42 default-k8s-diff-port-249165 kubelet[3928]: E0528 22:15:42.658182    3928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6q6pz" podUID="443b12f9-e99d-4bb7-ae3f-8a25ed277f44"
	May 28 22:15:43 default-k8s-diff-port-249165 kubelet[3928]: E0528 22:15:43.694927    3928 iptables.go:577] "Could not set up iptables canary" err=<
	May 28 22:15:43 default-k8s-diff-port-249165 kubelet[3928]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 22:15:43 default-k8s-diff-port-249165 kubelet[3928]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 22:15:43 default-k8s-diff-port-249165 kubelet[3928]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 22:15:43 default-k8s-diff-port-249165 kubelet[3928]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 28 22:15:56 default-k8s-diff-port-249165 kubelet[3928]: E0528 22:15:56.658417    3928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6q6pz" podUID="443b12f9-e99d-4bb7-ae3f-8a25ed277f44"
	May 28 22:16:11 default-k8s-diff-port-249165 kubelet[3928]: E0528 22:16:11.658269    3928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6q6pz" podUID="443b12f9-e99d-4bb7-ae3f-8a25ed277f44"
	May 28 22:16:25 default-k8s-diff-port-249165 kubelet[3928]: E0528 22:16:25.658738    3928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6q6pz" podUID="443b12f9-e99d-4bb7-ae3f-8a25ed277f44"
	May 28 22:16:37 default-k8s-diff-port-249165 kubelet[3928]: E0528 22:16:37.657577    3928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6q6pz" podUID="443b12f9-e99d-4bb7-ae3f-8a25ed277f44"
	May 28 22:16:43 default-k8s-diff-port-249165 kubelet[3928]: E0528 22:16:43.694943    3928 iptables.go:577] "Could not set up iptables canary" err=<
	May 28 22:16:43 default-k8s-diff-port-249165 kubelet[3928]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 28 22:16:43 default-k8s-diff-port-249165 kubelet[3928]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 28 22:16:43 default-k8s-diff-port-249165 kubelet[3928]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 28 22:16:43 default-k8s-diff-port-249165 kubelet[3928]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 28 22:16:48 default-k8s-diff-port-249165 kubelet[3928]: E0528 22:16:48.658066    3928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-6q6pz" podUID="443b12f9-e99d-4bb7-ae3f-8a25ed277f44"
	
	
	==> storage-provisioner [fec711aaa85924e7057421993cf75a2bf3b977afbbed007f940052128eb02e89] <==
	I0528 22:03:58.686637       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0528 22:03:58.699325       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0528 22:03:58.699391       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0528 22:03:58.715562       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0528 22:03:58.716604       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-249165_a80cb70d-b310-4cb7-a736-dbef5dd84831!
	I0528 22:03:58.720033       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e7bc4f15-7b61-4ca5-a0e6-91a662ae0cb2", APIVersion:"v1", ResourceVersion:"391", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-249165_a80cb70d-b310-4cb7-a736-dbef5dd84831 became leader
	I0528 22:03:58.817594       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-249165_a80cb70d-b310-4cb7-a736-dbef5dd84831!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-249165 -n default-k8s-diff-port-249165
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-249165 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-6q6pz
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-249165 describe pod metrics-server-569cc877fc-6q6pz
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-249165 describe pod metrics-server-569cc877fc-6q6pz: exit status 1 (60.117724ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-6q6pz" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-249165 describe pod metrics-server-569cc877fc-6q6pz: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (238.60s)
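For context, the post-mortem output above shows why the addon check failed: the kubelet log reports repeated ImagePullBackOff errors for the deliberately unreachable image fake.domain/registry.k8s.io/echoserver:1.4, so metrics-server-569cc877fc-6q6pz never became ready, and by the time the harness ran its describe step the pod had already been removed (hence the NotFound error). The commands below are an illustrative sketch of re-running the same check by hand against the profile named in the log; they assume the default-k8s-diff-port-249165 cluster is still running and that the pod name (taken from the log above) still exists, which it may not.

	# Mirror helpers_test.go:261 - list pods that are not in the Running phase
	kubectl --context default-k8s-diff-port-249165 get po -A --field-selector=status.phase!=Running
	# Inspect kube-system pods to see metrics-server state and its node placement
	kubectl --context default-k8s-diff-port-249165 -n kube-system get pods -o wide
	# Mirror helpers_test.go:277 - describe the (possibly already deleted) metrics-server pod
	kubectl --context default-k8s-diff-port-249165 -n kube-system describe pod metrics-server-569cc877fc-6q6pz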

                                                
                                    

Test pass (244/312)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 51.99
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.30.1/json-events 13.58
13 TestDownloadOnly/v1.30.1/preload-exists 0
17 TestDownloadOnly/v1.30.1/LogsDuration 0.05
18 TestDownloadOnly/v1.30.1/DeleteAll 0.12
19 TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds 0.11
21 TestBinaryMirror 0.54
22 TestOffline 64.13
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 146.33
29 TestAddons/parallel/Registry 20.47
31 TestAddons/parallel/InspektorGadget 11.79
33 TestAddons/parallel/HelmTiller 11.1
35 TestAddons/parallel/CSI 93.59
36 TestAddons/parallel/Headlamp 13.92
37 TestAddons/parallel/CloudSpanner 5.56
38 TestAddons/parallel/LocalPath 56.27
39 TestAddons/parallel/NvidiaDevicePlugin 6.63
40 TestAddons/parallel/Yakd 6.01
44 TestAddons/serial/GCPAuth/Namespaces 0.11
46 TestCertOptions 59.45
49 TestForceSystemdFlag 86.1
50 TestForceSystemdEnv 72.53
52 TestKVMDriverInstallOrUpdate 4.93
56 TestErrorSpam/setup 42.47
57 TestErrorSpam/start 0.32
58 TestErrorSpam/status 0.7
59 TestErrorSpam/pause 1.55
60 TestErrorSpam/unpause 1.55
61 TestErrorSpam/stop 4.88
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 98.48
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 39.74
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.07
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.96
73 TestFunctional/serial/CacheCmd/cache/add_local 2.2
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.56
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.09
81 TestFunctional/serial/ExtraConfig 35.63
82 TestFunctional/serial/ComponentHealth 0.06
83 TestFunctional/serial/LogsCmd 1.31
84 TestFunctional/serial/LogsFileCmd 1.33
85 TestFunctional/serial/InvalidService 4.62
87 TestFunctional/parallel/ConfigCmd 0.29
88 TestFunctional/parallel/DashboardCmd 15.86
89 TestFunctional/parallel/DryRun 0.27
90 TestFunctional/parallel/InternationalLanguage 0.2
91 TestFunctional/parallel/StatusCmd 1.26
95 TestFunctional/parallel/ServiceCmdConnect 11.87
96 TestFunctional/parallel/AddonsCmd 0.13
97 TestFunctional/parallel/PersistentVolumeClaim 36.54
99 TestFunctional/parallel/SSHCmd 0.48
100 TestFunctional/parallel/CpCmd 1.37
101 TestFunctional/parallel/MySQL 27.16
102 TestFunctional/parallel/FileSync 0.24
103 TestFunctional/parallel/CertSync 1.41
107 TestFunctional/parallel/NodeLabels 0.06
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.44
111 TestFunctional/parallel/License 0.66
112 TestFunctional/parallel/ServiceCmd/DeployApp 11.22
113 TestFunctional/parallel/Version/short 0.05
114 TestFunctional/parallel/Version/components 0.86
115 TestFunctional/parallel/ImageCommands/ImageListShort 0.4
116 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
117 TestFunctional/parallel/ImageCommands/ImageListJson 0.46
118 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
119 TestFunctional/parallel/ImageCommands/ImageBuild 3.93
120 TestFunctional/parallel/ImageCommands/Setup 2.12
121 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
122 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.08
123 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.3
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
126 TestFunctional/parallel/ProfileCmd/profile_list 0.37
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.33
128 TestFunctional/parallel/MountCmd/any-port 22.68
129 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.41
130 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 10.97
131 TestFunctional/parallel/ServiceCmd/List 0.3
132 TestFunctional/parallel/ServiceCmd/JSONOutput 0.31
133 TestFunctional/parallel/ServiceCmd/HTTPS 0.33
134 TestFunctional/parallel/ServiceCmd/Format 0.37
135 TestFunctional/parallel/ServiceCmd/URL 0.43
136 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.44
137 TestFunctional/parallel/ImageCommands/ImageRemove 0.56
138 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.93
139 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.58
140 TestFunctional/parallel/MountCmd/specific-port 2.01
150 TestFunctional/parallel/MountCmd/VerifyCleanup 1.71
151 TestFunctional/delete_addon-resizer_images 0.06
152 TestFunctional/delete_my-image_image 0.01
153 TestFunctional/delete_minikube_cached_images 0.01
157 TestMultiControlPlane/serial/StartCluster 200.92
158 TestMultiControlPlane/serial/DeployApp 6.6
159 TestMultiControlPlane/serial/PingHostFromPods 1.2
160 TestMultiControlPlane/serial/AddWorkerNode 45.79
161 TestMultiControlPlane/serial/NodeLabels 0.06
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.52
163 TestMultiControlPlane/serial/CopyFile 12.45
165 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.49
167 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.39
169 TestMultiControlPlane/serial/DeleteSecondaryNode 17.91
170 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.36
172 TestMultiControlPlane/serial/RestartCluster 351.34
173 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.36
174 TestMultiControlPlane/serial/AddSecondaryNode 70.47
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.52
179 TestJSONOutput/start/Command 60.6
180 TestJSONOutput/start/Audit 0
182 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/pause/Command 0.69
186 TestJSONOutput/pause/Audit 0
188 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/unpause/Command 0.59
192 TestJSONOutput/unpause/Audit 0
194 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/stop/Command 9.35
198 TestJSONOutput/stop/Audit 0
200 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
202 TestErrorJSONOutput 0.18
207 TestMainNoArgs 0.04
208 TestMinikubeProfile 83.84
211 TestMountStart/serial/StartWithMountFirst 28.82
212 TestMountStart/serial/VerifyMountFirst 0.36
213 TestMountStart/serial/StartWithMountSecond 25.36
214 TestMountStart/serial/VerifyMountSecond 0.35
215 TestMountStart/serial/DeleteFirst 0.67
216 TestMountStart/serial/VerifyMountPostDelete 0.35
217 TestMountStart/serial/Stop 1.26
218 TestMountStart/serial/RestartStopped 24.52
219 TestMountStart/serial/VerifyMountPostStop 0.36
222 TestMultiNode/serial/FreshStart2Nodes 97.37
223 TestMultiNode/serial/DeployApp2Nodes 5.68
224 TestMultiNode/serial/PingHostFrom2Pods 0.77
225 TestMultiNode/serial/AddNode 42.49
226 TestMultiNode/serial/MultiNodeLabels 0.06
227 TestMultiNode/serial/ProfileList 0.21
228 TestMultiNode/serial/CopyFile 6.94
229 TestMultiNode/serial/StopNode 2.29
230 TestMultiNode/serial/StartAfterStop 28.5
232 TestMultiNode/serial/DeleteNode 2.44
234 TestMultiNode/serial/RestartMultiNode 186.07
235 TestMultiNode/serial/ValidateNameConflict 42.2
242 TestScheduledStopUnix 114.14
246 TestRunningBinaryUpgrade 225.8
251 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
252 TestNoKubernetes/serial/StartWithK8s 94.48
253 TestNoKubernetes/serial/StartWithStopK8s 71.55
254 TestNoKubernetes/serial/Start 52.15
262 TestNetworkPlugins/group/false 2.93
266 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
267 TestNoKubernetes/serial/ProfileList 4.43
268 TestNoKubernetes/serial/Stop 2.59
277 TestPause/serial/Start 96.05
278 TestNoKubernetes/serial/StartNoArgs 69.98
279 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.18
280 TestStoppedBinaryUpgrade/Setup 2.67
281 TestStoppedBinaryUpgrade/Upgrade 96.36
283 TestNetworkPlugins/group/auto/Start 100.31
284 TestStoppedBinaryUpgrade/MinikubeLogs 0.86
285 TestNetworkPlugins/group/kindnet/Start 74.69
286 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
287 TestNetworkPlugins/group/kindnet/KubeletFlags 0.2
288 TestNetworkPlugins/group/kindnet/NetCatPod 10.33
289 TestNetworkPlugins/group/auto/KubeletFlags 0.19
290 TestNetworkPlugins/group/auto/NetCatPod 11.23
291 TestNetworkPlugins/group/kindnet/DNS 0.25
292 TestNetworkPlugins/group/kindnet/Localhost 0.15
293 TestNetworkPlugins/group/kindnet/HairPin 0.12
294 TestNetworkPlugins/group/auto/DNS 0.17
295 TestNetworkPlugins/group/auto/Localhost 0.15
296 TestNetworkPlugins/group/auto/HairPin 0.14
297 TestNetworkPlugins/group/calico/Start 91.68
298 TestNetworkPlugins/group/custom-flannel/Start 102.33
299 TestNetworkPlugins/group/calico/ControllerPod 6.01
300 TestNetworkPlugins/group/calico/KubeletFlags 0.24
301 TestNetworkPlugins/group/calico/NetCatPod 10.23
302 TestNetworkPlugins/group/calico/DNS 0.18
303 TestNetworkPlugins/group/calico/Localhost 0.16
304 TestNetworkPlugins/group/calico/HairPin 0.13
305 TestNetworkPlugins/group/flannel/Start 86.05
306 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
307 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.23
308 TestNetworkPlugins/group/custom-flannel/DNS 0.19
309 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
310 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
311 TestNetworkPlugins/group/bridge/Start 114.02
312 TestNetworkPlugins/group/enable-default-cni/Start 123.34
313 TestNetworkPlugins/group/flannel/ControllerPod 6.01
314 TestNetworkPlugins/group/flannel/KubeletFlags 0.2
315 TestNetworkPlugins/group/flannel/NetCatPod 11.22
316 TestNetworkPlugins/group/flannel/DNS 0.16
317 TestNetworkPlugins/group/flannel/Localhost 0.15
318 TestNetworkPlugins/group/flannel/HairPin 0.13
321 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
322 TestNetworkPlugins/group/bridge/NetCatPod 10.27
323 TestNetworkPlugins/group/bridge/DNS 0.2
324 TestNetworkPlugins/group/bridge/Localhost 0.14
325 TestNetworkPlugins/group/bridge/HairPin 0.12
326 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.25
327 TestNetworkPlugins/group/enable-default-cni/NetCatPod 14.3
329 TestStartStop/group/no-preload/serial/FirstStart 111.19
330 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
331 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
332 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
334 TestStartStop/group/embed-certs/serial/FirstStart 97.3
335 TestStartStop/group/no-preload/serial/DeployApp 9.28
336 TestStartStop/group/embed-certs/serial/DeployApp 9.31
337 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.01
339 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.96
345 TestStartStop/group/no-preload/serial/SecondStart 651.13
346 TestStartStop/group/embed-certs/serial/SecondStart 568.76
347 TestStartStop/group/old-k8s-version/serial/Stop 4.53
348 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
351 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 237.76
352 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.3
353 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.01
356 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 619.87
365 TestStartStop/group/newest-cni/serial/FirstStart 57.4
366 TestStartStop/group/newest-cni/serial/DeployApp 0
367 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.14
368 TestStartStop/group/newest-cni/serial/Stop 7.58
369 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
370 TestStartStop/group/newest-cni/serial/SecondStart 34.72
371 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
372 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
373 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
374 TestStartStop/group/newest-cni/serial/Pause 2.41
x
+
TestDownloadOnly/v1.20.0/json-events (51.99s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-610519 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-610519 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (51.986967381s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (51.99s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-610519
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-610519: exit status 85 (57.798738ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-610519 | jenkins | v1.33.1 | 28 May 24 20:21 UTC |          |
	|         | -p download-only-610519        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/28 20:21:09
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0528 20:21:09.313905   11772 out.go:291] Setting OutFile to fd 1 ...
	I0528 20:21:09.314166   11772 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:21:09.314177   11772 out.go:304] Setting ErrFile to fd 2...
	I0528 20:21:09.314181   11772 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:21:09.314367   11772 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
	W0528 20:21:09.314474   11772 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18966-3963/.minikube/config/config.json: open /home/jenkins/minikube-integration/18966-3963/.minikube/config/config.json: no such file or directory
	I0528 20:21:09.315010   11772 out.go:298] Setting JSON to true
	I0528 20:21:09.315852   11772 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":212,"bootTime":1716927457,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0528 20:21:09.315905   11772 start.go:139] virtualization: kvm guest
	I0528 20:21:09.318391   11772 out.go:97] [download-only-610519] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0528 20:21:09.319937   11772 out.go:169] MINIKUBE_LOCATION=18966
	W0528 20:21:09.318500   11772 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball: no such file or directory
	I0528 20:21:09.318545   11772 notify.go:220] Checking for updates...
	I0528 20:21:09.322714   11772 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 20:21:09.324055   11772 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 20:21:09.325289   11772 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 20:21:09.326565   11772 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0528 20:21:09.328765   11772 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0528 20:21:09.329012   11772 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 20:21:09.426833   11772 out.go:97] Using the kvm2 driver based on user configuration
	I0528 20:21:09.426864   11772 start.go:297] selected driver: kvm2
	I0528 20:21:09.426873   11772 start.go:901] validating driver "kvm2" against <nil>
	I0528 20:21:09.427212   11772 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 20:21:09.427339   11772 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18966-3963/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0528 20:21:09.441630   11772 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0528 20:21:09.441675   11772 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0528 20:21:09.442180   11772 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0528 20:21:09.442347   11772 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0528 20:21:09.442376   11772 cni.go:84] Creating CNI manager for ""
	I0528 20:21:09.442386   11772 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 20:21:09.442400   11772 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0528 20:21:09.442459   11772 start.go:340] cluster config:
	{Name:download-only-610519 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-610519 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 20:21:09.442624   11772 iso.go:125] acquiring lock: {Name:mk0ef48bd19f4019c3ee87391d15b9afc1d5e8d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 20:21:09.444626   11772 out.go:97] Downloading VM boot image ...
	I0528 20:21:09.444667   11772 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18966-3963/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso
	I0528 20:21:19.642717   11772 out.go:97] Starting "download-only-610519" primary control-plane node in "download-only-610519" cluster
	I0528 20:21:19.642755   11772 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0528 20:21:19.748592   11772 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0528 20:21:19.748629   11772 cache.go:56] Caching tarball of preloaded images
	I0528 20:21:19.748821   11772 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0528 20:21:19.750849   11772 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0528 20:21:19.750870   11772 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0528 20:21:19.858420   11772 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0528 20:21:33.681350   11772 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0528 20:21:33.681447   11772 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0528 20:21:34.582412   11772 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0528 20:21:34.582814   11772 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/download-only-610519/config.json ...
	I0528 20:21:34.582851   11772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/download-only-610519/config.json: {Name:mk6227a885e2e4cfc473b70b4d35214f4820b1df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:21:34.583030   11772 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0528 20:21:34.583225   11772 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18966-3963/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-610519 host does not exist
	  To start a cluster, run: "minikube start -p download-only-610519"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-610519
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.30.1/json-events (13.58s)

=== RUN   TestDownloadOnly/v1.30.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-984992 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-984992 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (13.580439297s)
--- PASS: TestDownloadOnly/v1.30.1/json-events (13.58s)

TestDownloadOnly/v1.30.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.1/preload-exists
--- PASS: TestDownloadOnly/v1.30.1/preload-exists (0.00s)

TestDownloadOnly/v1.30.1/LogsDuration (0.05s)

=== RUN   TestDownloadOnly/v1.30.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-984992
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-984992: exit status 85 (51.938331ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-610519 | jenkins | v1.33.1 | 28 May 24 20:21 UTC |                     |
	|         | -p download-only-610519        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 28 May 24 20:22 UTC | 28 May 24 20:22 UTC |
	| delete  | -p download-only-610519        | download-only-610519 | jenkins | v1.33.1 | 28 May 24 20:22 UTC | 28 May 24 20:22 UTC |
	| start   | -o=json --download-only        | download-only-984992 | jenkins | v1.33.1 | 28 May 24 20:22 UTC |                     |
	|         | -p download-only-984992        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/28 20:22:01
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0528 20:22:01.602203   12100 out.go:291] Setting OutFile to fd 1 ...
	I0528 20:22:01.602288   12100 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:22:01.602292   12100 out.go:304] Setting ErrFile to fd 2...
	I0528 20:22:01.602297   12100 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:22:01.602444   12100 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
	I0528 20:22:01.602921   12100 out.go:298] Setting JSON to true
	I0528 20:22:01.603721   12100 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":265,"bootTime":1716927457,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0528 20:22:01.603774   12100 start.go:139] virtualization: kvm guest
	I0528 20:22:01.605871   12100 out.go:97] [download-only-984992] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0528 20:22:01.607287   12100 out.go:169] MINIKUBE_LOCATION=18966
	I0528 20:22:01.605990   12100 notify.go:220] Checking for updates...
	I0528 20:22:01.609688   12100 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 20:22:01.610868   12100 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 20:22:01.612113   12100 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 20:22:01.613288   12100 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0528 20:22:01.615348   12100 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0528 20:22:01.615547   12100 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 20:22:01.647301   12100 out.go:97] Using the kvm2 driver based on user configuration
	I0528 20:22:01.647327   12100 start.go:297] selected driver: kvm2
	I0528 20:22:01.647332   12100 start.go:901] validating driver "kvm2" against <nil>
	I0528 20:22:01.647647   12100 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 20:22:01.647722   12100 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18966-3963/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0528 20:22:01.661587   12100 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0528 20:22:01.661627   12100 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0528 20:22:01.662115   12100 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0528 20:22:01.662253   12100 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0528 20:22:01.662300   12100 cni.go:84] Creating CNI manager for ""
	I0528 20:22:01.662311   12100 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0528 20:22:01.662319   12100 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0528 20:22:01.662369   12100 start.go:340] cluster config:
	{Name:download-only-984992 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:download-only-984992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 20:22:01.662454   12100 iso.go:125] acquiring lock: {Name:mk0ef48bd19f4019c3ee87391d15b9afc1d5e8d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 20:22:01.664127   12100 out.go:97] Starting "download-only-984992" primary control-plane node in "download-only-984992" cluster
	I0528 20:22:01.664143   12100 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 20:22:01.821828   12100 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0528 20:22:01.821858   12100 cache.go:56] Caching tarball of preloaded images
	I0528 20:22:01.822029   12100 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0528 20:22:01.823868   12100 out.go:97] Downloading Kubernetes v1.30.1 preload ...
	I0528 20:22:01.823883   12100 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 ...
	I0528 20:22:02.019686   12100 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:a8c8ea593b2bc93a46ce7b040a44f86d -> /home/jenkins/minikube-integration/18966-3963/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-984992 host does not exist
	  To start a cluster, run: "minikube start -p download-only-984992"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.1/LogsDuration (0.05s)

TestDownloadOnly/v1.30.1/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.30.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.1/DeleteAll (0.12s)

TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-984992
--- PASS: TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (0.11s)

TestBinaryMirror (0.54s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-408531 --alsologtostderr --binary-mirror http://127.0.0.1:38549 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-408531" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-408531
--- PASS: TestBinaryMirror (0.54s)

TestOffline (64.13s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-159184 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-159184 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m3.12996559s)
helpers_test.go:175: Cleaning up "offline-crio-159184" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-159184
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-159184: (1.00265025s)
--- PASS: TestOffline (64.13s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-307023
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-307023: exit status 85 (52.682668ms)

-- stdout --
	* Profile "addons-307023" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-307023"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-307023
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-307023: exit status 85 (52.331451ms)

-- stdout --
	* Profile "addons-307023" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-307023"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (146.33s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-307023 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-307023 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m26.332772691s)
--- PASS: TestAddons/Setup (146.33s)

TestAddons/parallel/Registry (20.47s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 21.709056ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-g8f66" [d44205f8-5d8f-4cb5-86a9-a06ec1a83ab3] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004384052s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-6v96c" [c226957d-d70d-48ff-85a3-d800697e600d] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004375996s
addons_test.go:342: (dbg) Run:  kubectl --context addons-307023 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-307023 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-307023 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (8.606360605s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-307023 ip
2024/05/28 20:25:02 [DEBUG] GET http://192.168.39.230:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-307023 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (20.47s)

TestAddons/parallel/InspektorGadget (11.79s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-pvbk8" [8c7837de-ee55-44c4-b500-20c02c198812] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004427164s
addons_test.go:843: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-307023
addons_test.go:843: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-307023: (5.783720817s)
--- PASS: TestAddons/parallel/InspektorGadget (11.79s)

TestAddons/parallel/HelmTiller (11.1s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.271975ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-9kf86" [2e6adf96-5773-4664-abee-77443509067d] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.005031969s
addons_test.go:475: (dbg) Run:  kubectl --context addons-307023 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-307023 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.485672473s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-307023 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.10s)

TestAddons/parallel/CSI (93.59s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 18.745148ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-307023 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-307023 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [f61295b7-6435-4208-9e62-b014ba34082c] Pending
helpers_test.go:344: "task-pv-pod" [f61295b7-6435-4208-9e62-b014ba34082c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [f61295b7-6435-4208-9e62-b014ba34082c] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.004749676s
addons_test.go:586: (dbg) Run:  kubectl --context addons-307023 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-307023 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-307023 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-307023 delete pod task-pv-pod
addons_test.go:596: (dbg) Done: kubectl --context addons-307023 delete pod task-pv-pod: (1.119186153s)
addons_test.go:602: (dbg) Run:  kubectl --context addons-307023 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-307023 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-307023 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [090ca62d-c092-46fb-a591-90b5a58a4ca4] Pending
helpers_test.go:344: "task-pv-pod-restore" [090ca62d-c092-46fb-a591-90b5a58a4ca4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [090ca62d-c092-46fb-a591-90b5a58a4ca4] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.003477825s
addons_test.go:628: (dbg) Run:  kubectl --context addons-307023 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Run:  kubectl --context addons-307023 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-307023 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-linux-amd64 -p addons-307023 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-linux-amd64 -p addons-307023 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.786639372s)
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-307023 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (93.59s)

TestAddons/parallel/Headlamp (13.92s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-307023 --alsologtostderr -v=1
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-68456f997b-jtz8c" [b1e41c1e-f373-4b51-9cf7-70350652cb99] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-68456f997b-jtz8c" [b1e41c1e-f373-4b51-9cf7-70350652cb99] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004441836s
--- PASS: TestAddons/parallel/Headlamp (13.92s)

TestAddons/parallel/CloudSpanner (5.56s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-jjzp5" [5804785e-209c-4ef8-ba32-e0f856f317d2] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003387149s
addons_test.go:862: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-307023
--- PASS: TestAddons/parallel/CloudSpanner (5.56s)

TestAddons/parallel/LocalPath (56.27s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-307023 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-307023 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-307023 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [aab76638-bfe7-4dd1-a71a-aabba4bcc72b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [aab76638-bfe7-4dd1-a71a-aabba4bcc72b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [aab76638-bfe7-4dd1-a71a-aabba4bcc72b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003998751s
addons_test.go:992: (dbg) Run:  kubectl --context addons-307023 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-linux-amd64 -p addons-307023 ssh "cat /opt/local-path-provisioner/pvc-ea111a43-617c-4baa-a9fd-5cb0ed5a97d7_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-307023 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-307023 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-linux-amd64 -p addons-307023 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-linux-amd64 -p addons-307023 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.475862635s)
--- PASS: TestAddons/parallel/LocalPath (56.27s)

TestAddons/parallel/NvidiaDevicePlugin (6.63s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-fw58d" [9a054b41-fa5f-4c2b-bac0-5e8f84e8860f] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005174829s
addons_test.go:1056: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-307023
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.63s)

TestAddons/parallel/Yakd (6.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-wpxcg" [a9c0d228-38f8-4c7f-99d8-bd87c9f25ce2] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004520271s
--- PASS: TestAddons/parallel/Yakd (6.01s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-307023 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-307023 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestCertOptions (59.45s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-104928 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-104928 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (58.026991569s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-104928 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-104928 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-104928 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-104928" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-104928
--- PASS: TestCertOptions (59.45s)

TestForceSystemdFlag (86.1s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-081566 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-081566 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m24.950543432s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-081566 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-081566" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-081566
--- PASS: TestForceSystemdFlag (86.10s)

TestForceSystemdEnv (72.53s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-211616 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-211616 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m11.553671646s)
helpers_test.go:175: Cleaning up "force-systemd-env-211616" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-211616
--- PASS: TestForceSystemdEnv (72.53s)
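
This is the environment-variable counterpart of the --force-systemd flag test above. Judging by the MINIKUBE_FORCE_SYSTEMD entry that shows up in start output elsewhere in this report, the equivalent manual invocation is presumably along these lines, with the same crio.conf drop-in check confirming the effect:

    MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-amd64 start -p force-systemd-env-211616 --memory=2048 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p force-systemd-env-211616 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"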

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (4.93s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.93s)

                                                
                                    
x
+
TestErrorSpam/setup (42.47s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-386308 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-386308 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-386308 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-386308 --driver=kvm2  --container-runtime=crio: (42.466532272s)
--- PASS: TestErrorSpam/setup (42.47s)

                                                
                                    
x
+
TestErrorSpam/start (0.32s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-386308 --log_dir /tmp/nospam-386308 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-386308 --log_dir /tmp/nospam-386308 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-386308 --log_dir /tmp/nospam-386308 start --dry-run
--- PASS: TestErrorSpam/start (0.32s)

                                                
                                    
x
+
TestErrorSpam/status (0.7s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-386308 --log_dir /tmp/nospam-386308 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-386308 --log_dir /tmp/nospam-386308 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-386308 --log_dir /tmp/nospam-386308 status
--- PASS: TestErrorSpam/status (0.70s)

                                                
                                    
x
+
TestErrorSpam/pause (1.55s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-386308 --log_dir /tmp/nospam-386308 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-386308 --log_dir /tmp/nospam-386308 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-386308 --log_dir /tmp/nospam-386308 pause
--- PASS: TestErrorSpam/pause (1.55s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.55s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-386308 --log_dir /tmp/nospam-386308 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-386308 --log_dir /tmp/nospam-386308 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-386308 --log_dir /tmp/nospam-386308 unpause
--- PASS: TestErrorSpam/unpause (1.55s)

                                                
                                    
x
+
TestErrorSpam/stop (4.88s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-386308 --log_dir /tmp/nospam-386308 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-386308 --log_dir /tmp/nospam-386308 stop: (2.276892252s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-386308 --log_dir /tmp/nospam-386308 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-386308 --log_dir /tmp/nospam-386308 stop: (1.331455079s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-386308 --log_dir /tmp/nospam-386308 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-386308 --log_dir /tmp/nospam-386308 stop: (1.273646229s)
--- PASS: TestErrorSpam/stop (4.88s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18966-3963/.minikube/files/etc/test/nested/copy/11760/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (98.48s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-193928 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0528 20:34:42.598682   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.crt: no such file or directory
E0528 20:34:42.604276   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.crt: no such file or directory
E0528 20:34:42.614508   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.crt: no such file or directory
E0528 20:34:42.634795   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.crt: no such file or directory
E0528 20:34:42.675077   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.crt: no such file or directory
E0528 20:34:42.755360   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.crt: no such file or directory
E0528 20:34:42.915751   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.crt: no such file or directory
E0528 20:34:43.236340   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.crt: no such file or directory
E0528 20:34:43.877254   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.crt: no such file or directory
E0528 20:34:45.157720   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.crt: no such file or directory
E0528 20:34:47.718214   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.crt: no such file or directory
E0528 20:34:52.838678   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.crt: no such file or directory
E0528 20:35:03.079503   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.crt: no such file or directory
E0528 20:35:23.560054   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.crt: no such file or directory
E0528 20:36:04.521953   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-193928 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m38.482933285s)
--- PASS: TestFunctional/serial/StartWithProxy (98.48s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (39.74s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-193928 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-193928 --alsologtostderr -v=8: (39.736999558s)
functional_test.go:659: soft start took 39.73765688s for "functional-193928" cluster.
--- PASS: TestFunctional/serial/SoftStart (39.74s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-193928 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (2.96s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-193928 cache add registry.k8s.io/pause:3.3: (1.040285607s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.96s)
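
The cache subcommands exercised here double as an offline-image workflow: cache add pulls an image onto the host cache and, as the later verify_cache_inside_node step checks, makes it visible to crictl inside the node. A condensed replay:

    out/minikube-linux-amd64 -p functional-193928 cache add registry.k8s.io/pause:3.1
    out/minikube-linux-amd64 cache list
    out/minikube-linux-amd64 -p functional-193928 ssh sudo crictl images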

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.2s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-193928 /tmp/TestFunctionalserialCacheCmdcacheadd_local2179107425/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 cache add minikube-local-cache-test:functional-193928
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-193928 cache add minikube-local-cache-test:functional-193928: (1.868991243s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 cache delete minikube-local-cache-test:functional-193928
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-193928
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.20s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.56s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-193928 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (206.875615ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.56s)
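
The sequence above is also a useful recovery recipe: when an image has been removed from the node, cache reload pushes everything in the local cache back in. Roughly, using the same commands as this block:

    out/minikube-linux-amd64 -p functional-193928 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-193928 ssh sudo crictl inspecti registry.k8s.io/pause:latest    # fails while the image is gone
    out/minikube-linux-amd64 -p functional-193928 cache reload
    out/minikube-linux-amd64 -p functional-193928 ssh sudo crictl inspecti registry.k8s.io/pause:latest    # succeeds again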

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 kubectl -- --context functional-193928 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-193928 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (35.63s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-193928 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0528 20:37:26.442677   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-193928 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.628155726s)
functional_test.go:757: restart took 35.628274179s for "functional-193928" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (35.63s)
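
The restart here shows the general pattern for passing component flags through to Kubernetes, --extra-config=component.flag=value. The admission-plugin value is the one from this run; other apiserver or kubelet options follow the same shape:

    out/minikube-linux-amd64 start -p functional-193928 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all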

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-193928 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.31s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-193928 logs: (1.308565472s)
--- PASS: TestFunctional/serial/LogsCmd (1.31s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.33s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 logs --file /tmp/TestFunctionalserialLogsFileCmd2922044982/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-193928 logs --file /tmp/TestFunctionalserialLogsFileCmd2922044982/001/logs.txt: (1.330312566s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.33s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.62s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-193928 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-193928
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-193928: exit status 115 (278.606602ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.19:30851 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-193928 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-193928 delete -f testdata/invalidsvc.yaml: (1.138150097s)
--- PASS: TestFunctional/serial/InvalidService (4.62s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-193928 config get cpus: exit status 14 (54.488241ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-193928 config get cpus: exit status 14 (39.832843ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.29s)
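
As the exit codes above show, config get on an unset key returns a non-zero status (14 in this run), which makes the setting easy to script around; a condensed version of the loop the test runs, with the fallback echo added purely for illustration:

    out/minikube-linux-amd64 -p functional-193928 config set cpus 2
    out/minikube-linux-amd64 -p functional-193928 config get cpus
    out/minikube-linux-amd64 -p functional-193928 config unset cpus
    out/minikube-linux-amd64 -p functional-193928 config get cpus || echo "cpus is not set"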

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (15.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-193928 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-193928 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 22170: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.86s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-193928 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-193928 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (134.58501ms)

                                                
                                                
-- stdout --
	* [functional-193928] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18966-3963/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-3963/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0528 20:37:50.229818   20999 out.go:291] Setting OutFile to fd 1 ...
	I0528 20:37:50.230059   20999 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:37:50.230067   20999 out.go:304] Setting ErrFile to fd 2...
	I0528 20:37:50.230071   20999 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:37:50.230324   20999 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
	I0528 20:37:50.230858   20999 out.go:298] Setting JSON to false
	I0528 20:37:50.231881   20999 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1213,"bootTime":1716927457,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0528 20:37:50.231952   20999 start.go:139] virtualization: kvm guest
	I0528 20:37:50.234688   20999 out.go:177] * [functional-193928] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0528 20:37:50.236008   20999 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 20:37:50.237352   20999 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 20:37:50.236062   20999 notify.go:220] Checking for updates...
	I0528 20:37:50.239774   20999 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 20:37:50.240949   20999 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 20:37:50.242104   20999 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0528 20:37:50.243320   20999 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 20:37:50.244822   20999 config.go:182] Loaded profile config "functional-193928": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 20:37:50.245180   20999 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:37:50.245233   20999 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:37:50.260205   20999 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41157
	I0528 20:37:50.260678   20999 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:37:50.261252   20999 main.go:141] libmachine: Using API Version  1
	I0528 20:37:50.261277   20999 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:37:50.261586   20999 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:37:50.261775   20999 main.go:141] libmachine: (functional-193928) Calling .DriverName
	I0528 20:37:50.262121   20999 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 20:37:50.262523   20999 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:37:50.262563   20999 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:37:50.276741   20999 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43877
	I0528 20:37:50.277179   20999 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:37:50.277669   20999 main.go:141] libmachine: Using API Version  1
	I0528 20:37:50.277694   20999 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:37:50.278006   20999 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:37:50.278228   20999 main.go:141] libmachine: (functional-193928) Calling .DriverName
	I0528 20:37:50.314256   20999 out.go:177] * Using the kvm2 driver based on existing profile
	I0528 20:37:50.315502   20999 start.go:297] selected driver: kvm2
	I0528 20:37:50.315516   20999 start.go:901] validating driver "kvm2" against &{Name:functional-193928 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-193928 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 20:37:50.315645   20999 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0528 20:37:50.317710   20999 out.go:177] 
	W0528 20:37:50.318895   20999 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0528 20:37:50.320032   20999 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-193928 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.27s)
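
--dry-run validates the requested settings against the existing profile without touching the VM, so an impossible memory request fails fast with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23 above) while an otherwise valid dry run exits cleanly:

    out/minikube-linux-amd64 start -p functional-193928 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio    # rejected: below the 1800MB usable minimum
    out/minikube-linux-amd64 start -p functional-193928 --dry-run --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio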

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-193928 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-193928 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (200.551594ms)

                                                
                                                
-- stdout --
	* [functional-193928] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18966-3963/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-3963/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0528 20:37:50.499979   21069 out.go:291] Setting OutFile to fd 1 ...
	I0528 20:37:50.500218   21069 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:37:50.500228   21069 out.go:304] Setting ErrFile to fd 2...
	I0528 20:37:50.500233   21069 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:37:50.500515   21069 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
	I0528 20:37:50.501062   21069 out.go:298] Setting JSON to false
	I0528 20:37:50.502023   21069 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1213,"bootTime":1716927457,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0528 20:37:50.502082   21069 start.go:139] virtualization: kvm guest
	I0528 20:37:50.503996   21069 out.go:177] * [functional-193928] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0528 20:37:50.505442   21069 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 20:37:50.505461   21069 notify.go:220] Checking for updates...
	I0528 20:37:50.506825   21069 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 20:37:50.508678   21069 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 20:37:50.510121   21069 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 20:37:50.511653   21069 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0528 20:37:50.512940   21069 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 20:37:50.514821   21069 config.go:182] Loaded profile config "functional-193928": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 20:37:50.515434   21069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:37:50.515486   21069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:37:50.534683   21069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37191
	I0528 20:37:50.535118   21069 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:37:50.535777   21069 main.go:141] libmachine: Using API Version  1
	I0528 20:37:50.535797   21069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:37:50.536172   21069 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:37:50.536374   21069 main.go:141] libmachine: (functional-193928) Calling .DriverName
	I0528 20:37:50.536617   21069 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 20:37:50.536909   21069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 20:37:50.536957   21069 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 20:37:50.553040   21069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37975
	I0528 20:37:50.553467   21069 main.go:141] libmachine: () Calling .GetVersion
	I0528 20:37:50.553946   21069 main.go:141] libmachine: Using API Version  1
	I0528 20:37:50.553971   21069 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 20:37:50.554352   21069 main.go:141] libmachine: () Calling .GetMachineName
	I0528 20:37:50.554553   21069 main.go:141] libmachine: (functional-193928) Calling .DriverName
	I0528 20:37:50.640991   21069 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0528 20:37:50.646294   21069 start.go:297] selected driver: kvm2
	I0528 20:37:50.646311   21069 start.go:901] validating driver "kvm2" against &{Name:functional-193928 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-193928 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 20:37:50.646413   21069 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0528 20:37:50.649858   21069 out.go:177] 
	W0528 20:37:50.651121   21069 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0528 20:37:50.652737   21069 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.26s)
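
status accepts both a Go template and JSON output, which is what the three invocations above cover. The labels in front of the colons in the template are free text (the test spells one of them "kublet"); the fields actually read are .Host, .Kubelet, .APIServer and .Kubeconfig. Quoted for shell use:

    out/minikube-linux-amd64 -p functional-193928 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    out/minikube-linux-amd64 -p functional-193928 status -o json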

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (11.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-193928 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-193928 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-5qhpw" [7c79e135-2831-4883-9922-6b6df500534b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-5qhpw" [7c79e135-2831-4883-9922-6b6df500534b] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.003610961s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.19:31185
functional_test.go:1671: http://192.168.39.19:31185: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-57b4589c47-5qhpw

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.19:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.19:31185
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.87s)
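
The pattern verified here (create a deployment, expose it as a NodePort, then ask minikube for a reachable URL) is the usual way to hit a cluster service from the host on the kvm2 driver. Condensed from the commands above, with the curl added only as an illustration of the HTTP check the test does in Go:

    kubectl --context functional-193928 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-193928 expose deployment hello-node-connect --type=NodePort --port=8080
    URL=$(out/minikube-linux-amd64 -p functional-193928 service hello-node-connect --url)
    curl -s "$URL"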

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (36.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [d1da85a6-b878-43f2-b970-db976b994b9b] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.032684332s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-193928 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-193928 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-193928 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-193928 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-193928 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [aaf0fc94-d15c-4a9f-894f-39f0f0d58e2c] Pending
helpers_test.go:344: "sp-pod" [aaf0fc94-d15c-4a9f-894f-39f0f0d58e2c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [aaf0fc94-d15c-4a9f-894f-39f0f0d58e2c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 17.030056133s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-193928 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-193928 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-193928 delete -f testdata/storage-provisioner/pod.yaml: (2.946537054s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-193928 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [52015ca1-21d0-4846-9509-bbbf1506264a] Pending
helpers_test.go:344: "sp-pod" [52015ca1-21d0-4846-9509-bbbf1506264a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
2024/05/28 20:38:22 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:344: "sp-pod" [52015ca1-21d0-4846-9509-bbbf1506264a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004375055s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-193928 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (36.54s)
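
The flow being verified (claim a PVC from the default storage class, mount it in a pod, write a file, recreate the pod, and confirm the file survives) can be replayed with the same manifests from minikube's testdata tree, using the paths shown above:

    kubectl --context functional-193928 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-193928 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-193928 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-193928 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-193928 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-193928 exec sp-pod -- ls /tmp/mount    # foo should still be listed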

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 ssh -n functional-193928 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 cp functional-193928:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2232269464/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 ssh -n functional-193928 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 ssh -n functional-193928 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.37s)
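
minikube cp works in both directions (host to node and node back to host) and copes with a destination directory that does not exist yet, which is exactly what the three copies above check. In shorthand, with the local target path shortened for readability:

    out/minikube-linux-amd64 -p functional-193928 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-amd64 -p functional-193928 cp functional-193928:/home/docker/cp-test.txt /tmp/cp-test.txt
    out/minikube-linux-amd64 -p functional-193928 ssh -n functional-193928 "sudo cat /home/docker/cp-test.txt"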

                                                
                                    
x
+
TestFunctional/parallel/MySQL (27.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-193928 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-hbfqk" [5c9d33c7-eddd-462a-9202-8dfbb083b68d] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-hbfqk" [5c9d33c7-eddd-462a-9202-8dfbb083b68d] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.007424676s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-193928 exec mysql-64454c8b5c-hbfqk -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-193928 exec mysql-64454c8b5c-hbfqk -- mysql -ppassword -e "show databases;": exit status 1 (326.647169ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-193928 exec mysql-64454c8b5c-hbfqk -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-193928 exec mysql-64454c8b5c-hbfqk -- mysql -ppassword -e "show databases;": exit status 1 (410.652527ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-193928 exec mysql-64454c8b5c-hbfqk -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (27.16s)
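The two ERROR 2002 exits above are expected: the pod reports Ready before mysqld starts listening on its socket, so the harness simply retries until the query succeeds. A rough manual equivalent (pod name is a placeholder):

    until kubectl --context <context> exec <mysql-pod> -- \
        mysql -ppassword -e "show databases;"; do
      sleep 5   # wait for mysqld to finish initializing
    done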

TestFunctional/parallel/FileSync (0.24s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/11760/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 ssh "sudo cat /etc/test/nested/copy/11760/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)
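As I understand minikube's file sync (worth confirming against the docs), files placed under $MINIKUBE_HOME/files/<path> on the host are copied to /<path> inside the node, which is how the path probed above gets there. A hedged sketch:

    mkdir -p ~/.minikube/files/etc/test/nested/copy/11760
    echo "Test file for checking file sync process" \
        > ~/.minikube/files/etc/test/nested/copy/11760/hosts
    minikube -p <profile> ssh "sudo cat /etc/test/nested/copy/11760/hosts"   # after a (re)start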

TestFunctional/parallel/CertSync (1.41s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/11760.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 ssh "sudo cat /etc/ssl/certs/11760.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/11760.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 ssh "sudo cat /usr/share/ca-certificates/11760.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/117602.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 ssh "sudo cat /etc/ssl/certs/117602.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/117602.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 ssh "sudo cat /usr/share/ca-certificates/117602.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.41s)
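This exercises certificate sync: to the best of my knowledge, a PEM dropped under $MINIKUBE_HOME/certs is installed into the node's CA store at both locations probed above, plus an OpenSSL subject-hash link (which is where the 51391683.0 name comes from). A hedged manual check:

    cp my-ca.pem ~/.minikube/certs/        # picked up on the next minikube start
    minikube -p <profile> ssh "sudo cat /etc/ssl/certs/my-ca.pem"
    minikube -p <profile> ssh "openssl x509 -noout -subject_hash -in /etc/ssl/certs/my-ca.pem"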

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-193928 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-193928 ssh "sudo systemctl is-active docker": exit status 1 (225.581888ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-193928 ssh "sudo systemctl is-active containerd": exit status 1 (218.501864ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)
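The non-zero exits are the assertion here: with cri-o as the runtime, docker and containerd must be inactive, and systemctl is-active exits with status 3 for an inactive unit. A quick manual spot check:

    minikube -p <profile> ssh "sudo systemctl is-active crio"        # expect: active
    minikube -p <profile> ssh "sudo systemctl is-active docker"      # expect: inactive, exit 3
    minikube -p <profile> ssh "sudo systemctl is-active containerd"  # expect: inactive, exit 3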

TestFunctional/parallel/License (0.66s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.66s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.22s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-193928 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-193928 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-ctnbb" [366d1165-0203-41ee-b777-8ece768d0d86] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-ctnbb" [366d1165-0203-41ee-b777-8ece768d0d86] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.004054078s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.22s)
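The deployment used by the ServiceCmd subtests is a plain NodePort-exposed echoserver; recreating it outside the harness (and fetching its URL) would look roughly like:

    kubectl --context <context> create deployment hello-node --image=registry.k8s.io/echoserver:1.8
    kubectl --context <context> expose deployment hello-node --type=NodePort --port=8080
    minikube -p <profile> service hello-node --url   # prints http://<node-ip>:<nodeport>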

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.86s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.86s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-193928 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.1
registry.k8s.io/kube-proxy:v1.30.1
registry.k8s.io/kube-controller-manager:v1.30.1
registry.k8s.io/kube-apiserver:v1.30.1
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-193928
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-193928
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-193928 image ls --format short --alsologtostderr:
I0528 20:38:08.830871   22147 out.go:291] Setting OutFile to fd 1 ...
I0528 20:38:08.830973   22147 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0528 20:38:08.830984   22147 out.go:304] Setting ErrFile to fd 2...
I0528 20:38:08.830990   22147 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0528 20:38:08.831173   22147 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
I0528 20:38:08.831711   22147 config.go:182] Loaded profile config "functional-193928": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0528 20:38:08.831845   22147 config.go:182] Loaded profile config "functional-193928": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0528 20:38:08.832300   22147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0528 20:38:08.832344   22147 main.go:141] libmachine: Launching plugin server for driver kvm2
I0528 20:38:08.847044   22147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38111
I0528 20:38:08.847498   22147 main.go:141] libmachine: () Calling .GetVersion
I0528 20:38:08.848101   22147 main.go:141] libmachine: Using API Version  1
I0528 20:38:08.848123   22147 main.go:141] libmachine: () Calling .SetConfigRaw
I0528 20:38:08.848543   22147 main.go:141] libmachine: () Calling .GetMachineName
I0528 20:38:08.848753   22147 main.go:141] libmachine: (functional-193928) Calling .GetState
I0528 20:38:08.850648   22147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0528 20:38:08.850680   22147 main.go:141] libmachine: Launching plugin server for driver kvm2
I0528 20:38:08.864916   22147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41549
I0528 20:38:08.865362   22147 main.go:141] libmachine: () Calling .GetVersion
I0528 20:38:08.865875   22147 main.go:141] libmachine: Using API Version  1
I0528 20:38:08.865892   22147 main.go:141] libmachine: () Calling .SetConfigRaw
I0528 20:38:08.866164   22147 main.go:141] libmachine: () Calling .GetMachineName
I0528 20:38:08.866341   22147 main.go:141] libmachine: (functional-193928) Calling .DriverName
I0528 20:38:08.866535   22147 ssh_runner.go:195] Run: systemctl --version
I0528 20:38:08.866555   22147 main.go:141] libmachine: (functional-193928) Calling .GetSSHHostname
I0528 20:38:08.869213   22147 main.go:141] libmachine: (functional-193928) DBG | domain functional-193928 has defined MAC address 52:54:00:66:68:6d in network mk-functional-193928
I0528 20:38:08.869627   22147 main.go:141] libmachine: (functional-193928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:68:6d", ip: ""} in network mk-functional-193928: {Iface:virbr1 ExpiryTime:2024-05-28 21:34:42 +0000 UTC Type:0 Mac:52:54:00:66:68:6d Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:functional-193928 Clientid:01:52:54:00:66:68:6d}
I0528 20:38:08.869660   22147 main.go:141] libmachine: (functional-193928) DBG | domain functional-193928 has defined IP address 192.168.39.19 and MAC address 52:54:00:66:68:6d in network mk-functional-193928
I0528 20:38:08.869780   22147 main.go:141] libmachine: (functional-193928) Calling .GetSSHPort
I0528 20:38:08.869930   22147 main.go:141] libmachine: (functional-193928) Calling .GetSSHKeyPath
I0528 20:38:08.870084   22147 main.go:141] libmachine: (functional-193928) Calling .GetSSHUsername
I0528 20:38:08.870218   22147 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/functional-193928/id_rsa Username:docker}
I0528 20:38:09.047830   22147 ssh_runner.go:195] Run: sudo crictl images --output json
I0528 20:38:09.187343   22147 main.go:141] libmachine: Making call to close driver server
I0528 20:38:09.187360   22147 main.go:141] libmachine: (functional-193928) Calling .Close
I0528 20:38:09.187611   22147 main.go:141] libmachine: Successfully made call to close driver server
I0528 20:38:09.187621   22147 main.go:141] libmachine: (functional-193928) DBG | Closing plugin on server side
I0528 20:38:09.187629   22147 main.go:141] libmachine: Making call to close connection to plugin binary
I0528 20:38:09.187640   22147 main.go:141] libmachine: Making call to close driver server
I0528 20:38:09.187649   22147 main.go:141] libmachine: (functional-193928) Calling .Close
I0528 20:38:09.187879   22147 main.go:141] libmachine: Successfully made call to close driver server
I0528 20:38:09.187905   22147 main.go:141] libmachine: (functional-193928) DBG | Closing plugin on server side
I0528 20:38:09.187906   22147 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.40s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-193928 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-proxy              | v1.30.1            | 747097150317f | 85.9MB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/nginx                 | latest             | e784f4560448b | 192MB  |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/minikube-local-cache-test     | functional-193928  | d97544844b9f6 | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/kube-controller-manager | v1.30.1            | 25a1387cdab82 | 112MB  |
| docker.io/kindest/kindnetd              | v20240202-8f1494ea | 4950bb10b3f87 | 65.3MB |
| gcr.io/google-containers/addon-resizer  | functional-193928  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/kube-scheduler          | v1.30.1            | a52dc94f0a912 | 63MB   |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| registry.k8s.io/kube-apiserver          | v1.30.1            | 91be940803172 | 118MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-193928 image ls --format table --alsologtostderr:
I0528 20:38:09.698720   22203 out.go:291] Setting OutFile to fd 1 ...
I0528 20:38:09.698827   22203 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0528 20:38:09.698837   22203 out.go:304] Setting ErrFile to fd 2...
I0528 20:38:09.698842   22203 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0528 20:38:09.699023   22203 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
I0528 20:38:09.699529   22203 config.go:182] Loaded profile config "functional-193928": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0528 20:38:09.699621   22203 config.go:182] Loaded profile config "functional-193928": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0528 20:38:09.699996   22203 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0528 20:38:09.700034   22203 main.go:141] libmachine: Launching plugin server for driver kvm2
I0528 20:38:09.715445   22203 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35151
I0528 20:38:09.715881   22203 main.go:141] libmachine: () Calling .GetVersion
I0528 20:38:09.716763   22203 main.go:141] libmachine: Using API Version  1
I0528 20:38:09.716850   22203 main.go:141] libmachine: () Calling .SetConfigRaw
I0528 20:38:09.718247   22203 main.go:141] libmachine: () Calling .GetMachineName
I0528 20:38:09.718445   22203 main.go:141] libmachine: (functional-193928) Calling .GetState
I0528 20:38:09.720266   22203 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0528 20:38:09.720315   22203 main.go:141] libmachine: Launching plugin server for driver kvm2
I0528 20:38:09.735150   22203 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44583
I0528 20:38:09.735636   22203 main.go:141] libmachine: () Calling .GetVersion
I0528 20:38:09.736225   22203 main.go:141] libmachine: Using API Version  1
I0528 20:38:09.736250   22203 main.go:141] libmachine: () Calling .SetConfigRaw
I0528 20:38:09.736598   22203 main.go:141] libmachine: () Calling .GetMachineName
I0528 20:38:09.736810   22203 main.go:141] libmachine: (functional-193928) Calling .DriverName
I0528 20:38:09.737031   22203 ssh_runner.go:195] Run: systemctl --version
I0528 20:38:09.737057   22203 main.go:141] libmachine: (functional-193928) Calling .GetSSHHostname
I0528 20:38:09.739856   22203 main.go:141] libmachine: (functional-193928) DBG | domain functional-193928 has defined MAC address 52:54:00:66:68:6d in network mk-functional-193928
I0528 20:38:09.740356   22203 main.go:141] libmachine: (functional-193928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:68:6d", ip: ""} in network mk-functional-193928: {Iface:virbr1 ExpiryTime:2024-05-28 21:34:42 +0000 UTC Type:0 Mac:52:54:00:66:68:6d Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:functional-193928 Clientid:01:52:54:00:66:68:6d}
I0528 20:38:09.740380   22203 main.go:141] libmachine: (functional-193928) DBG | domain functional-193928 has defined IP address 192.168.39.19 and MAC address 52:54:00:66:68:6d in network mk-functional-193928
I0528 20:38:09.740627   22203 main.go:141] libmachine: (functional-193928) Calling .GetSSHPort
I0528 20:38:09.740786   22203 main.go:141] libmachine: (functional-193928) Calling .GetSSHKeyPath
I0528 20:38:09.740969   22203 main.go:141] libmachine: (functional-193928) Calling .GetSSHUsername
I0528 20:38:09.741100   22203 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/functional-193928/id_rsa Username:docker}
I0528 20:38:09.841055   22203 ssh_runner.go:195] Run: sudo crictl images --output json
I0528 20:38:09.926459   22203 main.go:141] libmachine: Making call to close driver server
I0528 20:38:09.926480   22203 main.go:141] libmachine: (functional-193928) Calling .Close
I0528 20:38:09.926756   22203 main.go:141] libmachine: Successfully made call to close driver server
I0528 20:38:09.926772   22203 main.go:141] libmachine: Making call to close connection to plugin binary
I0528 20:38:09.926787   22203 main.go:141] libmachine: Making call to close driver server
I0528 20:38:09.926795   22203 main.go:141] libmachine: (functional-193928) Calling .Close
I0528 20:38:09.926794   22203 main.go:141] libmachine: (functional-193928) DBG | Closing plugin on server side
I0528 20:38:09.927038   22203 main.go:141] libmachine: Successfully made call to close driver server
I0528 20:38:09.927051   22203 main.go:141] libmachine: Making call to close connection to plugin binary
I0528 20:38:09.927074   22203 main.go:141] libmachine: (functional-193928) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-193928 image ls --format json --alsologtostderr:
[{"id":"4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988","docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"65291810"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b3
6e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52","registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.1"],"size":"112170310"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd","repoDigests":["registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa","registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b
3179c"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.1"],"size":"85933465"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"e784f4560448b14a66f55c26e1b4dad2c2877cc73d001b7cd0b18e24a700a070","repoDigests":["docker.io/library/nginx@sha256:a484819eb60211f5299034ac80f6a681b06f89e65866ce91f356ed7c72af059c","docker.io/library/nginx@sha256:e688fed0b0c7513a63364959e7d389c37ac8ecac7a6c6a31455eca2f5a71ab8b"],"repoTags":["docker.io/library/nginx:latest"],"size":"191805953"},{"id":"ffd4cfb
be753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-193928"],"size":"34114467"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035","repoDigests":["registry.k8s.io/kube-scheduler@sha
256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036","registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.1"],"size":"63026504"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"d97544844b9f6b0a8a9e6b849893b6e52ba08acdad84d3430800d5b9e3598689","repoDigests":["localhost/minikube-local-cache-test@sha256:e4dc610770583a510089bb89fd6f3f19b9f951992958b74de4ef56dd1e6da109"],"repoTags":["localhost/minikube-local-cache-test:functional-193928"],"size":"3328"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af9
88cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a","repoDigests":["registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea","registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.1"],"size":"117601759"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-193928 image ls --format json --alsologtostderr:
I0528 20:38:09.235477   22180 out.go:291] Setting OutFile to fd 1 ...
I0528 20:38:09.235909   22180 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0528 20:38:09.236024   22180 out.go:304] Setting ErrFile to fd 2...
I0528 20:38:09.236060   22180 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0528 20:38:09.236507   22180 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
I0528 20:38:09.237172   22180 config.go:182] Loaded profile config "functional-193928": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0528 20:38:09.237293   22180 config.go:182] Loaded profile config "functional-193928": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0528 20:38:09.237670   22180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0528 20:38:09.237716   22180 main.go:141] libmachine: Launching plugin server for driver kvm2
I0528 20:38:09.254979   22180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34945
I0528 20:38:09.255546   22180 main.go:141] libmachine: () Calling .GetVersion
I0528 20:38:09.256193   22180 main.go:141] libmachine: Using API Version  1
I0528 20:38:09.256210   22180 main.go:141] libmachine: () Calling .SetConfigRaw
I0528 20:38:09.256635   22180 main.go:141] libmachine: () Calling .GetMachineName
I0528 20:38:09.256867   22180 main.go:141] libmachine: (functional-193928) Calling .GetState
I0528 20:38:09.258586   22180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0528 20:38:09.258620   22180 main.go:141] libmachine: Launching plugin server for driver kvm2
I0528 20:38:09.275150   22180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37819
I0528 20:38:09.275586   22180 main.go:141] libmachine: () Calling .GetVersion
I0528 20:38:09.276083   22180 main.go:141] libmachine: Using API Version  1
I0528 20:38:09.276103   22180 main.go:141] libmachine: () Calling .SetConfigRaw
I0528 20:38:09.276468   22180 main.go:141] libmachine: () Calling .GetMachineName
I0528 20:38:09.276653   22180 main.go:141] libmachine: (functional-193928) Calling .DriverName
I0528 20:38:09.276854   22180 ssh_runner.go:195] Run: systemctl --version
I0528 20:38:09.276887   22180 main.go:141] libmachine: (functional-193928) Calling .GetSSHHostname
I0528 20:38:09.279947   22180 main.go:141] libmachine: (functional-193928) DBG | domain functional-193928 has defined MAC address 52:54:00:66:68:6d in network mk-functional-193928
I0528 20:38:09.280454   22180 main.go:141] libmachine: (functional-193928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:68:6d", ip: ""} in network mk-functional-193928: {Iface:virbr1 ExpiryTime:2024-05-28 21:34:42 +0000 UTC Type:0 Mac:52:54:00:66:68:6d Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:functional-193928 Clientid:01:52:54:00:66:68:6d}
I0528 20:38:09.280545   22180 main.go:141] libmachine: (functional-193928) DBG | domain functional-193928 has defined IP address 192.168.39.19 and MAC address 52:54:00:66:68:6d in network mk-functional-193928
I0528 20:38:09.280827   22180 main.go:141] libmachine: (functional-193928) Calling .GetSSHPort
I0528 20:38:09.281023   22180 main.go:141] libmachine: (functional-193928) Calling .GetSSHKeyPath
I0528 20:38:09.281189   22180 main.go:141] libmachine: (functional-193928) Calling .GetSSHUsername
I0528 20:38:09.281356   22180 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/functional-193928/id_rsa Username:docker}
I0528 20:38:09.442484   22180 ssh_runner.go:195] Run: sudo crictl images --output json
I0528 20:38:09.527237   22180 main.go:141] libmachine: Making call to close driver server
I0528 20:38:09.527250   22180 main.go:141] libmachine: (functional-193928) Calling .Close
I0528 20:38:09.527540   22180 main.go:141] libmachine: Successfully made call to close driver server
I0528 20:38:09.527556   22180 main.go:141] libmachine: Making call to close connection to plugin binary
I0528 20:38:09.527564   22180 main.go:141] libmachine: Making call to close driver server
I0528 20:38:09.527571   22180 main.go:141] libmachine: (functional-193928) Calling .Close
I0528 20:38:09.527577   22180 main.go:141] libmachine: (functional-193928) DBG | Closing plugin on server side
I0528 20:38:09.527812   22180 main.go:141] libmachine: Successfully made call to close driver server
I0528 20:38:09.527819   22180 main.go:141] libmachine: (functional-193928) DBG | Closing plugin on server side
I0528 20:38:09.527828   22180 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.46s)
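Since the JSON above is a flat array of {id, repoTags, repoDigests, size} objects, it is easy to post-process; a small sketch using jq (jq is an assumption, not part of the harness):

    minikube -p <profile> image ls --format json \
        | jq -r '.[] | "\(.repoTags[0])\t\(.size)"' \
        | sort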

TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-193928 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd
repoDigests:
- registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa
- registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c
repoTags:
- registry.k8s.io/kube-proxy:v1.30.1
size: "85933465"
- id: 4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
- docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "65291810"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-193928
size: "34114467"
- id: 91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea
- registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.1
size: "117601759"
- id: e784f4560448b14a66f55c26e1b4dad2c2877cc73d001b7cd0b18e24a700a070
repoDigests:
- docker.io/library/nginx@sha256:a484819eb60211f5299034ac80f6a681b06f89e65866ce91f356ed7c72af059c
- docker.io/library/nginx@sha256:e688fed0b0c7513a63364959e7d389c37ac8ecac7a6c6a31455eca2f5a71ab8b
repoTags:
- docker.io/library/nginx:latest
size: "191805953"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: d97544844b9f6b0a8a9e6b849893b6e52ba08acdad84d3430800d5b9e3598689
repoDigests:
- localhost/minikube-local-cache-test@sha256:e4dc610770583a510089bb89fd6f3f19b9f951992958b74de4ef56dd1e6da109
repoTags:
- localhost/minikube-local-cache-test:functional-193928
size: "3328"
- id: 25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52
- registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.1
size: "112170310"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036
- registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.1
size: "63026504"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-193928 image ls --format yaml --alsologtostderr:
I0528 20:38:09.973154   22227 out.go:291] Setting OutFile to fd 1 ...
I0528 20:38:09.973370   22227 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0528 20:38:09.973378   22227 out.go:304] Setting ErrFile to fd 2...
I0528 20:38:09.973382   22227 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0528 20:38:09.973537   22227 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
I0528 20:38:09.974058   22227 config.go:182] Loaded profile config "functional-193928": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0528 20:38:09.974156   22227 config.go:182] Loaded profile config "functional-193928": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0528 20:38:09.974508   22227 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0528 20:38:09.974544   22227 main.go:141] libmachine: Launching plugin server for driver kvm2
I0528 20:38:09.988677   22227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38123
I0528 20:38:09.989129   22227 main.go:141] libmachine: () Calling .GetVersion
I0528 20:38:09.989718   22227 main.go:141] libmachine: Using API Version  1
I0528 20:38:09.989736   22227 main.go:141] libmachine: () Calling .SetConfigRaw
I0528 20:38:09.990032   22227 main.go:141] libmachine: () Calling .GetMachineName
I0528 20:38:09.990214   22227 main.go:141] libmachine: (functional-193928) Calling .GetState
I0528 20:38:09.991777   22227 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0528 20:38:09.991809   22227 main.go:141] libmachine: Launching plugin server for driver kvm2
I0528 20:38:10.005871   22227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41191
I0528 20:38:10.006200   22227 main.go:141] libmachine: () Calling .GetVersion
I0528 20:38:10.006659   22227 main.go:141] libmachine: Using API Version  1
I0528 20:38:10.006679   22227 main.go:141] libmachine: () Calling .SetConfigRaw
I0528 20:38:10.007033   22227 main.go:141] libmachine: () Calling .GetMachineName
I0528 20:38:10.007255   22227 main.go:141] libmachine: (functional-193928) Calling .DriverName
I0528 20:38:10.007467   22227 ssh_runner.go:195] Run: systemctl --version
I0528 20:38:10.007487   22227 main.go:141] libmachine: (functional-193928) Calling .GetSSHHostname
I0528 20:38:10.010202   22227 main.go:141] libmachine: (functional-193928) DBG | domain functional-193928 has defined MAC address 52:54:00:66:68:6d in network mk-functional-193928
I0528 20:38:10.010606   22227 main.go:141] libmachine: (functional-193928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:68:6d", ip: ""} in network mk-functional-193928: {Iface:virbr1 ExpiryTime:2024-05-28 21:34:42 +0000 UTC Type:0 Mac:52:54:00:66:68:6d Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:functional-193928 Clientid:01:52:54:00:66:68:6d}
I0528 20:38:10.010637   22227 main.go:141] libmachine: (functional-193928) DBG | domain functional-193928 has defined IP address 192.168.39.19 and MAC address 52:54:00:66:68:6d in network mk-functional-193928
I0528 20:38:10.010765   22227 main.go:141] libmachine: (functional-193928) Calling .GetSSHPort
I0528 20:38:10.010937   22227 main.go:141] libmachine: (functional-193928) Calling .GetSSHKeyPath
I0528 20:38:10.011084   22227 main.go:141] libmachine: (functional-193928) Calling .GetSSHUsername
I0528 20:38:10.011218   22227 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/functional-193928/id_rsa Username:docker}
I0528 20:38:10.096073   22227 ssh_runner.go:195] Run: sudo crictl images --output json
I0528 20:38:10.133883   22227 main.go:141] libmachine: Making call to close driver server
I0528 20:38:10.133895   22227 main.go:141] libmachine: (functional-193928) Calling .Close
I0528 20:38:10.134167   22227 main.go:141] libmachine: Successfully made call to close driver server
I0528 20:38:10.134185   22227 main.go:141] libmachine: Making call to close connection to plugin binary
I0528 20:38:10.134203   22227 main.go:141] libmachine: (functional-193928) DBG | Closing plugin on server side
I0528 20:38:10.134207   22227 main.go:141] libmachine: Making call to close driver server
I0528 20:38:10.134235   22227 main.go:141] libmachine: (functional-193928) Calling .Close
I0528 20:38:10.134470   22227 main.go:141] libmachine: Successfully made call to close driver server
I0528 20:38:10.134490   22227 main.go:141] libmachine: Making call to close connection to plugin binary
I0528 20:38:10.134509   22227 main.go:141] libmachine: (functional-193928) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-193928 ssh pgrep buildkitd: exit status 1 (186.023291ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 image build -t localhost/my-image:functional-193928 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-193928 image build -t localhost/my-image:functional-193928 testdata/build --alsologtostderr: (3.525116252s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-193928 image build -t localhost/my-image:functional-193928 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 2b05ed1f48e
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-193928
--> b31332931e3
Successfully tagged localhost/my-image:functional-193928
b31332931e322d75eb21f814c2b5dbc806a40ae2e7eccf80661ee5cf39659073
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-193928 image build -t localhost/my-image:functional-193928 testdata/build --alsologtostderr:
I0528 20:38:10.362854   22297 out.go:291] Setting OutFile to fd 1 ...
I0528 20:38:10.363107   22297 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0528 20:38:10.363116   22297 out.go:304] Setting ErrFile to fd 2...
I0528 20:38:10.363120   22297 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0528 20:38:10.363279   22297 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
I0528 20:38:10.363760   22297 config.go:182] Loaded profile config "functional-193928": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0528 20:38:10.364300   22297 config.go:182] Loaded profile config "functional-193928": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0528 20:38:10.364696   22297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0528 20:38:10.364743   22297 main.go:141] libmachine: Launching plugin server for driver kvm2
I0528 20:38:10.379121   22297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42721
I0528 20:38:10.379512   22297 main.go:141] libmachine: () Calling .GetVersion
I0528 20:38:10.380041   22297 main.go:141] libmachine: Using API Version  1
I0528 20:38:10.380061   22297 main.go:141] libmachine: () Calling .SetConfigRaw
I0528 20:38:10.380364   22297 main.go:141] libmachine: () Calling .GetMachineName
I0528 20:38:10.380527   22297 main.go:141] libmachine: (functional-193928) Calling .GetState
I0528 20:38:10.382207   22297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0528 20:38:10.382243   22297 main.go:141] libmachine: Launching plugin server for driver kvm2
I0528 20:38:10.395936   22297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40689
I0528 20:38:10.396286   22297 main.go:141] libmachine: () Calling .GetVersion
I0528 20:38:10.396659   22297 main.go:141] libmachine: Using API Version  1
I0528 20:38:10.396712   22297 main.go:141] libmachine: () Calling .SetConfigRaw
I0528 20:38:10.397006   22297 main.go:141] libmachine: () Calling .GetMachineName
I0528 20:38:10.397181   22297 main.go:141] libmachine: (functional-193928) Calling .DriverName
I0528 20:38:10.397367   22297 ssh_runner.go:195] Run: systemctl --version
I0528 20:38:10.397393   22297 main.go:141] libmachine: (functional-193928) Calling .GetSSHHostname
I0528 20:38:10.399754   22297 main.go:141] libmachine: (functional-193928) DBG | domain functional-193928 has defined MAC address 52:54:00:66:68:6d in network mk-functional-193928
I0528 20:38:10.400057   22297 main.go:141] libmachine: (functional-193928) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:68:6d", ip: ""} in network mk-functional-193928: {Iface:virbr1 ExpiryTime:2024-05-28 21:34:42 +0000 UTC Type:0 Mac:52:54:00:66:68:6d Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:functional-193928 Clientid:01:52:54:00:66:68:6d}
I0528 20:38:10.400085   22297 main.go:141] libmachine: (functional-193928) DBG | domain functional-193928 has defined IP address 192.168.39.19 and MAC address 52:54:00:66:68:6d in network mk-functional-193928
I0528 20:38:10.400190   22297 main.go:141] libmachine: (functional-193928) Calling .GetSSHPort
I0528 20:38:10.400342   22297 main.go:141] libmachine: (functional-193928) Calling .GetSSHKeyPath
I0528 20:38:10.400508   22297 main.go:141] libmachine: (functional-193928) Calling .GetSSHUsername
I0528 20:38:10.400720   22297 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/functional-193928/id_rsa Username:docker}
I0528 20:38:10.484323   22297 build_images.go:161] Building image from path: /tmp/build.1040250284.tar
I0528 20:38:10.484392   22297 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0528 20:38:10.495382   22297 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1040250284.tar
I0528 20:38:10.500424   22297 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1040250284.tar: stat -c "%s %y" /var/lib/minikube/build/build.1040250284.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1040250284.tar': No such file or directory
I0528 20:38:10.500449   22297 ssh_runner.go:362] scp /tmp/build.1040250284.tar --> /var/lib/minikube/build/build.1040250284.tar (3072 bytes)
I0528 20:38:10.526542   22297 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1040250284
I0528 20:38:10.537269   22297 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1040250284 -xf /var/lib/minikube/build/build.1040250284.tar
I0528 20:38:10.546741   22297 crio.go:315] Building image: /var/lib/minikube/build/build.1040250284
I0528 20:38:10.546790   22297 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-193928 /var/lib/minikube/build/build.1040250284 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0528 20:38:13.822039   22297 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-193928 /var/lib/minikube/build/build.1040250284 --cgroup-manager=cgroupfs: (3.275223906s)
I0528 20:38:13.822106   22297 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1040250284
I0528 20:38:13.835306   22297 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1040250284.tar
I0528 20:38:13.844667   22297 build_images.go:217] Built localhost/my-image:functional-193928 from /tmp/build.1040250284.tar
I0528 20:38:13.844701   22297 build_images.go:133] succeeded building to: functional-193928
I0528 20:38:13.844707   22297 build_images.go:134] failed building to: 
I0528 20:38:13.844727   22297 main.go:141] libmachine: Making call to close driver server
I0528 20:38:13.844744   22297 main.go:141] libmachine: (functional-193928) Calling .Close
I0528 20:38:13.844998   22297 main.go:141] libmachine: (functional-193928) DBG | Closing plugin on server side
I0528 20:38:13.845023   22297 main.go:141] libmachine: Successfully made call to close driver server
I0528 20:38:13.845055   22297 main.go:141] libmachine: Making call to close connection to plugin binary
I0528 20:38:13.845067   22297 main.go:141] libmachine: Making call to close driver server
I0528 20:38:13.845077   22297 main.go:141] libmachine: (functional-193928) Calling .Close
I0528 20:38:13.845314   22297 main.go:141] libmachine: (functional-193928) DBG | Closing plugin on server side
I0528 20:38:13.845369   22297 main.go:141] libmachine: Successfully made call to close driver server
I0528 20:38:13.845419   22297 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.93s)
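From the STEP lines above, testdata/build is a minimal context; an equivalent one (the Dockerfile name is assumed, only its contents are shown in the log) would be:

    # Dockerfile
    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /

    # then, with any profile and tag:
    minikube -p <profile> image build -t localhost/my-image:<tag> ./build-context --alsologtostderr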

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.098656836s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-193928
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.12s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)
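
All three UpdateContextCmd variants above run the same command; as a rough sketch, its effect can be checked by hand like this (the kubectl follow-up is an assumption for illustration, only the minikube invocation is taken from the log):

$ out/minikube-linux-amd64 -p functional-193928 update-context --alsologtostderr -v=2
$ kubectl config current-context                   # should report the functional-193928 context
$ kubectl --context functional-193928 get nodes    # the rewritten server address should be reachable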

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 image load --daemon gcr.io/google-containers/addon-resizer:functional-193928 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-193928 image load --daemon gcr.io/google-containers/addon-resizer:functional-193928 --alsologtostderr: (4.09358857s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.30s)
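
A minimal sketch of the load-from-docker-daemon round trip exercised above, using only commands that appear in this run (the trailing grep is an illustrative convenience):

$ docker pull gcr.io/google-containers/addon-resizer:1.8.8
$ docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-193928
$ out/minikube-linux-amd64 -p functional-193928 image load --daemon gcr.io/google-containers/addon-resizer:functional-193928
$ out/minikube-linux-amd64 -p functional-193928 image ls | grep addon-resizer   # image now visible to CRI-O inside the VM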

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "322.425497ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "47.382847ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "274.884948ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "55.82ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)
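
The three ProfileCmd checks above time the plain, JSON and light listings; a hedged sketch of consuming the JSON form, assuming jq is installed and that the output keeps its usual valid/invalid top-level keys:

$ out/minikube-linux-amd64 profile list -o json | jq -r '.valid[].Name'           # names of healthy profiles
$ out/minikube-linux-amd64 profile list -o json --light | jq '.valid | length'    # --light skips cluster health probes, hence the shorter timing above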

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (22.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-193928 /tmp/TestFunctionalparallelMountCmdany-port3808183889/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1716928661235244428" to /tmp/TestFunctionalparallelMountCmdany-port3808183889/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1716928661235244428" to /tmp/TestFunctionalparallelMountCmdany-port3808183889/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1716928661235244428" to /tmp/TestFunctionalparallelMountCmdany-port3808183889/001/test-1716928661235244428
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-193928 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (240.925734ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 May 28 20:37 created-by-test
-rw-r--r-- 1 docker docker 24 May 28 20:37 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 May 28 20:37 test-1716928661235244428
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 ssh cat /mount-9p/test-1716928661235244428
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-193928 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [3d651b7b-3f08-48f4-9734-ace4df24c1d9] Pending
helpers_test.go:344: "busybox-mount" [3d651b7b-3f08-48f4-9734-ace4df24c1d9] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [3d651b7b-3f08-48f4-9734-ace4df24c1d9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [3d651b7b-3f08-48f4-9734-ace4df24c1d9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 20.004207716s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-193928 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-193928 /tmp/TestFunctionalparallelMountCmdany-port3808183889/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (22.68s)
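
The 9p mount check above can be repeated by hand roughly as follows; the host path is illustrative, everything else mirrors the commands in the log:

$ out/minikube-linux-amd64 mount -p functional-193928 /tmp/mount-demo:/mount-9p --alsologtostderr -v=1 &   # keep the mount process running in the background
$ out/minikube-linux-amd64 -p functional-193928 ssh "findmnt -T /mount-9p | grep 9p"   # may need a retry right after mounting, as seen above
$ out/minikube-linux-amd64 -p functional-193928 ssh -- ls -la /mount-9p
$ out/minikube-linux-amd64 -p functional-193928 ssh "sudo umount -f /mount-9p"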

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 image load --daemon gcr.io/google-containers/addon-resizer:functional-193928 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-193928 image load --daemon gcr.io/google-containers/addon-resizer:functional-193928 --alsologtostderr: (2.817387592s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.41s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (10.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.326871732s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-193928
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 image load --daemon gcr.io/google-containers/addon-resizer:functional-193928 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-193928 image load --daemon gcr.io/google-containers/addon-resizer:functional-193928 --alsologtostderr: (8.402774618s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (10.97s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.30s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 service list -o json
functional_test.go:1490: Took "305.052062ms" to run "out/minikube-linux-amd64 -p functional-193928 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.31s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.19:32675
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.19:32675
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.43s)
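
A short sketch tying the ServiceCmd checks together; it assumes the hello-node deployment and service created earlier in this run still exist:

$ out/minikube-linux-amd64 -p functional-193928 service list
$ out/minikube-linux-amd64 -p functional-193928 service hello-node --url               # e.g. http://192.168.39.19:32675
$ curl -s "$(out/minikube-linux-amd64 -p functional-193928 service hello-node --url)"  # hit the NodePort endpoint directly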

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 image save gcr.io/google-containers/addon-resizer:functional-193928 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-193928 image save gcr.io/google-containers/addon-resizer:functional-193928 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.436934818s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.44s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 image rm gcr.io/google-containers/addon-resizer:functional-193928 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-193928 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (2.508351808s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.93s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-193928
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 image save --daemon gcr.io/google-containers/addon-resizer:functional-193928 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-193928 image save --daemon gcr.io/google-containers/addon-resizer:functional-193928 --alsologtostderr: (1.5405119s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-193928
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.58s)
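
The four image subtests above (save to file, remove, load from file, save back to the docker daemon) form a round trip that can be sketched as follows; the /tmp path is illustrative, the flags are the ones used in the log:

$ out/minikube-linux-amd64 -p functional-193928 image save gcr.io/google-containers/addon-resizer:functional-193928 /tmp/addon-resizer-save.tar
$ out/minikube-linux-amd64 -p functional-193928 image rm gcr.io/google-containers/addon-resizer:functional-193928
$ out/minikube-linux-amd64 -p functional-193928 image load /tmp/addon-resizer-save.tar
$ out/minikube-linux-amd64 -p functional-193928 image save --daemon gcr.io/google-containers/addon-resizer:functional-193928
$ docker image inspect gcr.io/google-containers/addon-resizer:functional-193928   # tag is back in the local docker daemon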

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-193928 /tmp/TestFunctionalparallelMountCmdspecific-port3902799593/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-193928 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (264.230293ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-193928 /tmp/TestFunctionalparallelMountCmdspecific-port3902799593/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-193928 ssh "sudo umount -f /mount-9p": exit status 1 (236.130306ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-193928 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-193928 /tmp/TestFunctionalparallelMountCmdspecific-port3902799593/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.01s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-193928 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3815986764/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-193928 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3815986764/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-193928 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3815986764/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-193928 ssh "findmnt -T" /mount1: exit status 1 (311.937522ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-193928 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-193928 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-193928 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3815986764/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-193928 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3815986764/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-193928 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3815986764/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.71s)
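
VerifyCleanup relies on mount --kill to tear down every live mount for a profile; a minimal sketch with illustrative host paths:

$ out/minikube-linux-amd64 mount -p functional-193928 /tmp/shared:/mount1 --alsologtostderr -v=1 &
$ out/minikube-linux-amd64 mount -p functional-193928 /tmp/shared:/mount2 --alsologtostderr -v=1 &
$ out/minikube-linux-amd64 mount -p functional-193928 --kill=true   # terminates the background mount processes for this profile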

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.06s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-193928
--- PASS: TestFunctional/delete_addon-resizer_images (0.06s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-193928
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-193928
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (200.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-908878 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0528 20:39:42.598217   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.crt: no such file or directory
E0528 20:40:10.283731   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-908878 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m20.271237937s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (200.92s)
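
The HA bring-up above boils down to a single start invocation; a sketch with an illustrative profile name, reusing the flags from the log:

$ out/minikube-linux-amd64 start -p ha-demo --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio
$ out/minikube-linux-amd64 -p ha-demo status -v=7 --alsologtostderr   # expect multiple control-plane nodes reported Running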

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-908878 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-908878 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-908878 -- rollout status deployment/busybox: (4.412041728s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-908878 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-908878 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-908878 -- exec busybox-fc5497c4f-ldbfj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-908878 -- exec busybox-fc5497c4f-ljbzs -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-908878 -- exec busybox-fc5497c4f-rfl74 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-908878 -- exec busybox-fc5497c4f-ldbfj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-908878 -- exec busybox-fc5497c4f-ljbzs -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-908878 -- exec busybox-fc5497c4f-rfl74 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-908878 -- exec busybox-fc5497c4f-ldbfj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-908878 -- exec busybox-fc5497c4f-ljbzs -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-908878 -- exec busybox-fc5497c4f-rfl74 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.60s)
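
The DNS checks above exec into each busybox replica by pod name; a hedged shortcut queries just one replica through the deployment (kubectl's deploy/ target picks an arbitrary pod — an assumption for illustration, not what the test itself does):

$ out/minikube-linux-amd64 kubectl -p ha-908878 -- rollout status deployment/busybox
$ out/minikube-linux-amd64 kubectl -p ha-908878 -- exec deploy/busybox -- nslookup kubernetes.default.svc.cluster.local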

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-908878 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-908878 -- exec busybox-fc5497c4f-ldbfj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-908878 -- exec busybox-fc5497c4f-ldbfj -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-908878 -- exec busybox-fc5497c4f-ljbzs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-908878 -- exec busybox-fc5497c4f-ljbzs -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-908878 -- exec busybox-fc5497c4f-rfl74 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-908878 -- exec busybox-fc5497c4f-rfl74 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.20s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (45.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-908878 -v=7 --alsologtostderr
E0528 20:42:37.451607   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/functional-193928/client.crt: no such file or directory
E0528 20:42:37.456881   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/functional-193928/client.crt: no such file or directory
E0528 20:42:37.467122   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/functional-193928/client.crt: no such file or directory
E0528 20:42:37.487386   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/functional-193928/client.crt: no such file or directory
E0528 20:42:37.527653   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/functional-193928/client.crt: no such file or directory
E0528 20:42:37.608362   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/functional-193928/client.crt: no such file or directory
E0528 20:42:37.769370   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/functional-193928/client.crt: no such file or directory
E0528 20:42:38.089526   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/functional-193928/client.crt: no such file or directory
E0528 20:42:38.730312   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/functional-193928/client.crt: no such file or directory
E0528 20:42:40.010958   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/functional-193928/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-908878 -v=7 --alsologtostderr: (44.966061764s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 status -v=7 --alsologtostderr
E0528 20:42:42.571881   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/functional-193928/client.crt: no such file or directory
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (45.79s)
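
Node scale-out, as above, is one command per node; the --control-plane variant used later in this run is shown for contrast:

$ out/minikube-linux-amd64 node add -p ha-908878 -v=7 --alsologtostderr                   # adds a worker (ha-908878-m04 here)
$ out/minikube-linux-amd64 node add -p ha-908878 --control-plane -v=7 --alsologtostderr   # adds another control-plane member
$ out/minikube-linux-amd64 -p ha-908878 status -v=7 --alsologtostderr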

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-908878 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.52s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 cp testdata/cp-test.txt ha-908878:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 ssh -n ha-908878 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 cp ha-908878:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3657915045/001/cp-test_ha-908878.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 ssh -n ha-908878 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 cp ha-908878:/home/docker/cp-test.txt ha-908878-m02:/home/docker/cp-test_ha-908878_ha-908878-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 ssh -n ha-908878 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 ssh -n ha-908878-m02 "sudo cat /home/docker/cp-test_ha-908878_ha-908878-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 cp ha-908878:/home/docker/cp-test.txt ha-908878-m03:/home/docker/cp-test_ha-908878_ha-908878-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 ssh -n ha-908878 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 ssh -n ha-908878-m03 "sudo cat /home/docker/cp-test_ha-908878_ha-908878-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 cp ha-908878:/home/docker/cp-test.txt ha-908878-m04:/home/docker/cp-test_ha-908878_ha-908878-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 ssh -n ha-908878 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 ssh -n ha-908878-m04 "sudo cat /home/docker/cp-test_ha-908878_ha-908878-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 cp testdata/cp-test.txt ha-908878-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 ssh -n ha-908878-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 cp ha-908878-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3657915045/001/cp-test_ha-908878-m02.txt
E0528 20:42:47.692317   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/functional-193928/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 ssh -n ha-908878-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 cp ha-908878-m02:/home/docker/cp-test.txt ha-908878:/home/docker/cp-test_ha-908878-m02_ha-908878.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 ssh -n ha-908878-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 ssh -n ha-908878 "sudo cat /home/docker/cp-test_ha-908878-m02_ha-908878.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 cp ha-908878-m02:/home/docker/cp-test.txt ha-908878-m03:/home/docker/cp-test_ha-908878-m02_ha-908878-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 ssh -n ha-908878-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 ssh -n ha-908878-m03 "sudo cat /home/docker/cp-test_ha-908878-m02_ha-908878-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 cp ha-908878-m02:/home/docker/cp-test.txt ha-908878-m04:/home/docker/cp-test_ha-908878-m02_ha-908878-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 ssh -n ha-908878-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 ssh -n ha-908878-m04 "sudo cat /home/docker/cp-test_ha-908878-m02_ha-908878-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 cp testdata/cp-test.txt ha-908878-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 ssh -n ha-908878-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 cp ha-908878-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3657915045/001/cp-test_ha-908878-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 ssh -n ha-908878-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 cp ha-908878-m03:/home/docker/cp-test.txt ha-908878:/home/docker/cp-test_ha-908878-m03_ha-908878.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 ssh -n ha-908878-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 ssh -n ha-908878 "sudo cat /home/docker/cp-test_ha-908878-m03_ha-908878.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 cp ha-908878-m03:/home/docker/cp-test.txt ha-908878-m02:/home/docker/cp-test_ha-908878-m03_ha-908878-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 ssh -n ha-908878-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 ssh -n ha-908878-m02 "sudo cat /home/docker/cp-test_ha-908878-m03_ha-908878-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 cp ha-908878-m03:/home/docker/cp-test.txt ha-908878-m04:/home/docker/cp-test_ha-908878-m03_ha-908878-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 ssh -n ha-908878-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 ssh -n ha-908878-m04 "sudo cat /home/docker/cp-test_ha-908878-m03_ha-908878-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 cp testdata/cp-test.txt ha-908878-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 ssh -n ha-908878-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 cp ha-908878-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3657915045/001/cp-test_ha-908878-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 ssh -n ha-908878-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 cp ha-908878-m04:/home/docker/cp-test.txt ha-908878:/home/docker/cp-test_ha-908878-m04_ha-908878.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 ssh -n ha-908878-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 ssh -n ha-908878 "sudo cat /home/docker/cp-test_ha-908878-m04_ha-908878.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 cp ha-908878-m04:/home/docker/cp-test.txt ha-908878-m02:/home/docker/cp-test_ha-908878-m04_ha-908878-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 ssh -n ha-908878-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 ssh -n ha-908878-m02 "sudo cat /home/docker/cp-test_ha-908878-m04_ha-908878-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 cp ha-908878-m04:/home/docker/cp-test.txt ha-908878-m03:/home/docker/cp-test_ha-908878-m04_ha-908878-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 ssh -n ha-908878-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 ssh -n ha-908878-m03 "sudo cat /home/docker/cp-test_ha-908878-m04_ha-908878-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.45s)
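
Each CopyFile step above is a cp into a node followed by an ssh read-back; one representative pair, taken directly from the pattern in the log:

$ out/minikube-linux-amd64 -p ha-908878 cp testdata/cp-test.txt ha-908878-m02:/home/docker/cp-test.txt
$ out/minikube-linux-amd64 -p ha-908878 ssh -n ha-908878-m02 "sudo cat /home/docker/cp-test.txt"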

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0528 20:45:21.294730   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/functional-193928/client.crt: no such file or directory
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.489179885s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.49s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (17.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 node delete m03 -v=7 --alsologtostderr
E0528 20:52:37.451330   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/functional-193928/client.crt: no such file or directory
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-908878 node delete m03 -v=7 --alsologtostderr: (17.194492804s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.91s)
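
Removing a control-plane member and confirming the cluster view, sketched from the commands above (the readiness go-template is simplified to a plain get):

$ out/minikube-linux-amd64 -p ha-908878 node delete m03 -v=7 --alsologtostderr
$ out/minikube-linux-amd64 -p ha-908878 status -v=7 --alsologtostderr
$ kubectl get nodes   # the deleted node should no longer be listed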

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (351.34s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-908878 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0528 20:57:37.451243   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/functional-193928/client.crt: no such file or directory
E0528 20:59:00.496083   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/functional-193928/client.crt: no such file or directory
E0528 20:59:42.598208   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-908878 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m50.55433235s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (351.34s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.36s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (70.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-908878 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-908878 --control-plane -v=7 --alsologtostderr: (1m9.619819825s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-908878 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (70.47s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.52s)

                                                
                                    
TestJSONOutput/start/Command (60.6s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-528028 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0528 21:02:37.451364   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/functional-193928/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-528028 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m0.598609013s)
--- PASS: TestJSONOutput/start/Command (60.60s)
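
With --output=json, start emits one CloudEvents object per line on stdout (the TestErrorJSONOutput dump later in this report shows the field layout); a hedged sketch of filtering the step events, assuming jq is available and using an illustrative profile name:

$ out/minikube-linux-amd64 start -p json-demo --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 --container-runtime=crio > events.json
$ jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.currentstep + "/" + .data.totalsteps + " " + .data.message' events.json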

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.69s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-528028 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.59s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-528028 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (9.35s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-528028 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-528028 --output=json --user=testUser: (9.348451706s)
--- PASS: TestJSONOutput/stop/Command (9.35s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.18s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-091566 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-091566 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (57.532632ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"6d3825cb-e8b4-46ed-8b83-aed67f2c16a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-091566] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6ee1aa7e-74a9-4445-9978-ec9660ab45ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18966"}}
	{"specversion":"1.0","id":"91d2fb85-f913-4545-b10f-c9718c3b5b2a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"05ab2c4d-a1e1-40dd-8d82-f09e17bb1eab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18966-3963/kubeconfig"}}
	{"specversion":"1.0","id":"2f5f254c-bc60-4a9d-9bc0-87b1fef49d88","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-3963/.minikube"}}
	{"specversion":"1.0","id":"fb19cf4b-3d8f-43a5-bca3-da3f7750d6d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"ccfb772c-9064-4ad6-983c-d2594ac30d2a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3e8380c9-a0e5-43e3-8fe4-9b0aeddc96b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-091566" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-091566
--- PASS: TestErrorJSONOutput (0.18s)
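
The stdout block above is a stream of line-delimited CloudEvents, one JSON object per line. For consuming such a stream outside the test harness, a minimal Go sketch could look like the following; the JSON field names are taken from the captured output, while the struct and the program around them are illustrative assumptions rather than minikube's own types. It could be fed with something like `out/minikube-linux-amd64 start -p <profile> --output=json | go run parse_events.go`.

// parse_events.go: decode line-delimited cloudevents such as those
// printed above by `minikube start --output=json`.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event models only the fields visible in the captured output; the
// real payload may carry more.
type event struct {
	SpecVersion string            `json:"specversion"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip anything that is not a JSON event line
		}
		// error events (type io.k8s.sigs.minikube.error) also carry
		// "exitcode" and "name" entries in data, as seen above.
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}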

                                                
                                    
x
+
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
x
+
TestMinikubeProfile (83.84s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-479008 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-479008 --driver=kvm2  --container-runtime=crio: (43.115080202s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-482043 --driver=kvm2  --container-runtime=crio
E0528 21:04:42.597938   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-482043 --driver=kvm2  --container-runtime=crio: (38.389311848s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-479008
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-482043
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-482043" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-482043
helpers_test.go:175: Cleaning up "first-479008" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-479008
--- PASS: TestMinikubeProfile (83.84s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (28.82s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-910173 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-910173 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.823682962s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.82s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-910173 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-910173 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)
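
The mount checks above boil down to running `mount` inside the guest over `minikube ssh` and looking for a 9p entry. A rough Go sketch of the same check follows; the binary path and profile name are reused from the log and are assumptions outside this CI environment.

// verify9p.go: run `minikube ssh -- mount` for a profile and report
// whether a 9p filesystem shows up, mirroring the grep above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "mount-start-1-910173", "ssh", "--", "mount").CombinedOutput()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	if strings.Contains(string(out), "9p") {
		fmt.Println("9p mount present")
	} else {
		fmt.Println("no 9p mount found")
	}
}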

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (25.36s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-926310 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-926310 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (24.357942097s)
--- PASS: TestMountStart/serial/StartWithMountSecond (25.36s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.35s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-926310 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-926310 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.35s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.67s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-910173 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.67s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.35s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-926310 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-926310 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.35s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-926310
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-926310: (1.262170504s)
--- PASS: TestMountStart/serial/Stop (1.26s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (24.52s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-926310
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-926310: (23.518523051s)
--- PASS: TestMountStart/serial/RestartStopped (24.52s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-926310 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-926310 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (97.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-869191 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0528 21:07:37.451519   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/functional-193928/client.crt: no such file or directory
E0528 21:07:45.645509   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-869191 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m36.981069937s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-869191 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (97.37s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-869191 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-869191 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-869191 -- rollout status deployment/busybox: (4.270480369s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-869191 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-869191 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-869191 -- exec busybox-fc5497c4f-gxpk7 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-869191 -- exec busybox-fc5497c4f-qqxb7 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-869191 -- exec busybox-fc5497c4f-gxpk7 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-869191 -- exec busybox-fc5497c4f-qqxb7 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-869191 -- exec busybox-fc5497c4f-gxpk7 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-869191 -- exec busybox-fc5497c4f-qqxb7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.68s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-869191 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-869191 -- exec busybox-fc5497c4f-gxpk7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-869191 -- exec busybox-fc5497c4f-gxpk7 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-869191 -- exec busybox-fc5497c4f-qqxb7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-869191 -- exec busybox-fc5497c4f-qqxb7 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.77s)
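
The shell pipeline above extracts the host IP that the pods then ping: it keeps line 5 of the busybox nslookup output (`awk 'NR==5'`) and the third space-separated field on that line (`cut -d' ' -f3`). Below is a small Go sketch of the same extraction; the sample nslookup output is an assumption for illustration, and only the line/field positions come from the pipeline itself.

// hostip.go: mirror `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`
// by taking the third space-separated field of the fifth output line.
package main

import (
	"fmt"
	"strings"
)

func hostIP(nslookupOutput string) string {
	lines := strings.Split(nslookupOutput, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	// illustrative sample in the shape the test expects
	sample := "Server:    10.96.0.10\nAddress 1: 10.96.0.10\n\nName:      host.minikube.internal\nAddress 1: 192.168.39.1 host.minikube.internal\n"
	fmt.Println(hostIP(sample)) // prints 192.168.39.1
}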

                                                
                                    
x
+
TestMultiNode/serial/AddNode (42.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-869191 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-869191 -v 3 --alsologtostderr: (41.944876824s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-869191 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (42.49s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-869191 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (6.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-869191 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-869191 cp testdata/cp-test.txt multinode-869191:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-869191 ssh -n multinode-869191 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-869191 cp multinode-869191:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2076892289/001/cp-test_multinode-869191.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-869191 ssh -n multinode-869191 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-869191 cp multinode-869191:/home/docker/cp-test.txt multinode-869191-m02:/home/docker/cp-test_multinode-869191_multinode-869191-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-869191 ssh -n multinode-869191 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-869191 ssh -n multinode-869191-m02 "sudo cat /home/docker/cp-test_multinode-869191_multinode-869191-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-869191 cp multinode-869191:/home/docker/cp-test.txt multinode-869191-m03:/home/docker/cp-test_multinode-869191_multinode-869191-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-869191 ssh -n multinode-869191 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-869191 ssh -n multinode-869191-m03 "sudo cat /home/docker/cp-test_multinode-869191_multinode-869191-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-869191 cp testdata/cp-test.txt multinode-869191-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-869191 ssh -n multinode-869191-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-869191 cp multinode-869191-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2076892289/001/cp-test_multinode-869191-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-869191 ssh -n multinode-869191-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-869191 cp multinode-869191-m02:/home/docker/cp-test.txt multinode-869191:/home/docker/cp-test_multinode-869191-m02_multinode-869191.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-869191 ssh -n multinode-869191-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-869191 ssh -n multinode-869191 "sudo cat /home/docker/cp-test_multinode-869191-m02_multinode-869191.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-869191 cp multinode-869191-m02:/home/docker/cp-test.txt multinode-869191-m03:/home/docker/cp-test_multinode-869191-m02_multinode-869191-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-869191 ssh -n multinode-869191-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-869191 ssh -n multinode-869191-m03 "sudo cat /home/docker/cp-test_multinode-869191-m02_multinode-869191-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-869191 cp testdata/cp-test.txt multinode-869191-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-869191 ssh -n multinode-869191-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-869191 cp multinode-869191-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2076892289/001/cp-test_multinode-869191-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-869191 ssh -n multinode-869191-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-869191 cp multinode-869191-m03:/home/docker/cp-test.txt multinode-869191:/home/docker/cp-test_multinode-869191-m03_multinode-869191.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-869191 ssh -n multinode-869191-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-869191 ssh -n multinode-869191 "sudo cat /home/docker/cp-test_multinode-869191-m03_multinode-869191.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-869191 cp multinode-869191-m03:/home/docker/cp-test.txt multinode-869191-m02:/home/docker/cp-test_multinode-869191-m03_multinode-869191-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-869191 ssh -n multinode-869191-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-869191 ssh -n multinode-869191-m02 "sudo cat /home/docker/cp-test_multinode-869191-m03_multinode-869191-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.94s)
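
Each copy above is verified the same way: `minikube cp` pushes a file onto a node, then `minikube ssh -n <node> "sudo cat ..."` reads it back for comparison. A condensed Go sketch of one such round trip, reusing names and paths from the log, with error handling reduced to panics for brevity:

// cpverify.go: copy a local file to a node and read it back over ssh,
// then compare the two, as the helpers above do.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}
	run := func(args ...string) []byte {
		out, err := exec.Command("out/minikube-linux-amd64", args...).Output()
		if err != nil {
			panic(err)
		}
		return out
	}
	run("-p", "multinode-869191", "cp", "testdata/cp-test.txt",
		"multinode-869191-m02:/home/docker/cp-test.txt")
	got := run("-p", "multinode-869191", "ssh", "-n", "multinode-869191-m02",
		"sudo cat /home/docker/cp-test.txt")
	if bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		fmt.Println("copied file matches")
	} else {
		fmt.Println("mismatch between local file and node copy")
	}
}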

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-869191 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-869191 node stop m03: (1.47478095s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-869191 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-869191 status: exit status 7 (407.230949ms)

                                                
                                                
-- stdout --
	multinode-869191
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-869191-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-869191-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-869191 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-869191 status --alsologtostderr: exit status 7 (411.537496ms)

                                                
                                                
-- stdout --
	multinode-869191
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-869191-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-869191-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0528 21:08:46.465340   39422 out.go:291] Setting OutFile to fd 1 ...
	I0528 21:08:46.465574   39422 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:08:46.465583   39422 out.go:304] Setting ErrFile to fd 2...
	I0528 21:08:46.465593   39422 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:08:46.465788   39422 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
	I0528 21:08:46.465949   39422 out.go:298] Setting JSON to false
	I0528 21:08:46.465976   39422 mustload.go:65] Loading cluster: multinode-869191
	I0528 21:08:46.466082   39422 notify.go:220] Checking for updates...
	I0528 21:08:46.466328   39422 config.go:182] Loaded profile config "multinode-869191": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 21:08:46.466342   39422 status.go:255] checking status of multinode-869191 ...
	I0528 21:08:46.466724   39422 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:08:46.466779   39422 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:08:46.486206   39422 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39121
	I0528 21:08:46.486620   39422 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:08:46.487156   39422 main.go:141] libmachine: Using API Version  1
	I0528 21:08:46.487179   39422 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:08:46.487481   39422 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:08:46.487663   39422 main.go:141] libmachine: (multinode-869191) Calling .GetState
	I0528 21:08:46.489156   39422 status.go:330] multinode-869191 host status = "Running" (err=<nil>)
	I0528 21:08:46.489170   39422 host.go:66] Checking if "multinode-869191" exists ...
	I0528 21:08:46.489445   39422 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:08:46.489484   39422 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:08:46.504519   39422 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37845
	I0528 21:08:46.504861   39422 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:08:46.505301   39422 main.go:141] libmachine: Using API Version  1
	I0528 21:08:46.505323   39422 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:08:46.505644   39422 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:08:46.505818   39422 main.go:141] libmachine: (multinode-869191) Calling .GetIP
	I0528 21:08:46.508182   39422 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:08:46.508511   39422 main.go:141] libmachine: (multinode-869191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:1a:11", ip: ""} in network mk-multinode-869191: {Iface:virbr1 ExpiryTime:2024-05-28 22:06:25 +0000 UTC Type:0 Mac:52:54:00:4e:1a:11 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-869191 Clientid:01:52:54:00:4e:1a:11}
	I0528 21:08:46.508550   39422 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined IP address 192.168.39.65 and MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:08:46.508690   39422 host.go:66] Checking if "multinode-869191" exists ...
	I0528 21:08:46.509017   39422 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:08:46.509068   39422 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:08:46.523329   39422 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33171
	I0528 21:08:46.523703   39422 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:08:46.524135   39422 main.go:141] libmachine: Using API Version  1
	I0528 21:08:46.524159   39422 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:08:46.524460   39422 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:08:46.524611   39422 main.go:141] libmachine: (multinode-869191) Calling .DriverName
	I0528 21:08:46.524789   39422 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 21:08:46.524824   39422 main.go:141] libmachine: (multinode-869191) Calling .GetSSHHostname
	I0528 21:08:46.527339   39422 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:08:46.527740   39422 main.go:141] libmachine: (multinode-869191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:1a:11", ip: ""} in network mk-multinode-869191: {Iface:virbr1 ExpiryTime:2024-05-28 22:06:25 +0000 UTC Type:0 Mac:52:54:00:4e:1a:11 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-869191 Clientid:01:52:54:00:4e:1a:11}
	I0528 21:08:46.527771   39422 main.go:141] libmachine: (multinode-869191) DBG | domain multinode-869191 has defined IP address 192.168.39.65 and MAC address 52:54:00:4e:1a:11 in network mk-multinode-869191
	I0528 21:08:46.527900   39422 main.go:141] libmachine: (multinode-869191) Calling .GetSSHPort
	I0528 21:08:46.528083   39422 main.go:141] libmachine: (multinode-869191) Calling .GetSSHKeyPath
	I0528 21:08:46.528246   39422 main.go:141] libmachine: (multinode-869191) Calling .GetSSHUsername
	I0528 21:08:46.528403   39422 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/multinode-869191/id_rsa Username:docker}
	I0528 21:08:46.612833   39422 ssh_runner.go:195] Run: systemctl --version
	I0528 21:08:46.619737   39422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 21:08:46.634386   39422 kubeconfig.go:125] found "multinode-869191" server: "https://192.168.39.65:8443"
	I0528 21:08:46.634407   39422 api_server.go:166] Checking apiserver status ...
	I0528 21:08:46.634440   39422 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:08:46.649688   39422 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1178/cgroup
	W0528 21:08:46.659586   39422 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1178/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0528 21:08:46.659637   39422 ssh_runner.go:195] Run: ls
	I0528 21:08:46.664774   39422 api_server.go:253] Checking apiserver healthz at https://192.168.39.65:8443/healthz ...
	I0528 21:08:46.668889   39422 api_server.go:279] https://192.168.39.65:8443/healthz returned 200:
	ok
	I0528 21:08:46.668908   39422 status.go:422] multinode-869191 apiserver status = Running (err=<nil>)
	I0528 21:08:46.668918   39422 status.go:257] multinode-869191 status: &{Name:multinode-869191 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0528 21:08:46.668931   39422 status.go:255] checking status of multinode-869191-m02 ...
	I0528 21:08:46.669240   39422 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:08:46.669286   39422 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:08:46.684139   39422 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34451
	I0528 21:08:46.684482   39422 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:08:46.684870   39422 main.go:141] libmachine: Using API Version  1
	I0528 21:08:46.684890   39422 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:08:46.685180   39422 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:08:46.685334   39422 main.go:141] libmachine: (multinode-869191-m02) Calling .GetState
	I0528 21:08:46.686715   39422 status.go:330] multinode-869191-m02 host status = "Running" (err=<nil>)
	I0528 21:08:46.686732   39422 host.go:66] Checking if "multinode-869191-m02" exists ...
	I0528 21:08:46.687011   39422 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:08:46.687045   39422 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:08:46.701410   39422 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36093
	I0528 21:08:46.701747   39422 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:08:46.702190   39422 main.go:141] libmachine: Using API Version  1
	I0528 21:08:46.702226   39422 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:08:46.702519   39422 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:08:46.702677   39422 main.go:141] libmachine: (multinode-869191-m02) Calling .GetIP
	I0528 21:08:46.705266   39422 main.go:141] libmachine: (multinode-869191-m02) DBG | domain multinode-869191-m02 has defined MAC address 52:54:00:1d:9c:31 in network mk-multinode-869191
	I0528 21:08:46.705646   39422 main.go:141] libmachine: (multinode-869191-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:9c:31", ip: ""} in network mk-multinode-869191: {Iface:virbr1 ExpiryTime:2024-05-28 22:07:25 +0000 UTC Type:0 Mac:52:54:00:1d:9c:31 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:multinode-869191-m02 Clientid:01:52:54:00:1d:9c:31}
	I0528 21:08:46.705676   39422 main.go:141] libmachine: (multinode-869191-m02) DBG | domain multinode-869191-m02 has defined IP address 192.168.39.98 and MAC address 52:54:00:1d:9c:31 in network mk-multinode-869191
	I0528 21:08:46.705795   39422 host.go:66] Checking if "multinode-869191-m02" exists ...
	I0528 21:08:46.706085   39422 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:08:46.706118   39422 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:08:46.720697   39422 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34467
	I0528 21:08:46.721109   39422 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:08:46.721546   39422 main.go:141] libmachine: Using API Version  1
	I0528 21:08:46.721564   39422 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:08:46.721870   39422 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:08:46.722042   39422 main.go:141] libmachine: (multinode-869191-m02) Calling .DriverName
	I0528 21:08:46.722237   39422 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 21:08:46.722262   39422 main.go:141] libmachine: (multinode-869191-m02) Calling .GetSSHHostname
	I0528 21:08:46.725168   39422 main.go:141] libmachine: (multinode-869191-m02) DBG | domain multinode-869191-m02 has defined MAC address 52:54:00:1d:9c:31 in network mk-multinode-869191
	I0528 21:08:46.725553   39422 main.go:141] libmachine: (multinode-869191-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:9c:31", ip: ""} in network mk-multinode-869191: {Iface:virbr1 ExpiryTime:2024-05-28 22:07:25 +0000 UTC Type:0 Mac:52:54:00:1d:9c:31 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:multinode-869191-m02 Clientid:01:52:54:00:1d:9c:31}
	I0528 21:08:46.725592   39422 main.go:141] libmachine: (multinode-869191-m02) DBG | domain multinode-869191-m02 has defined IP address 192.168.39.98 and MAC address 52:54:00:1d:9c:31 in network mk-multinode-869191
	I0528 21:08:46.725732   39422 main.go:141] libmachine: (multinode-869191-m02) Calling .GetSSHPort
	I0528 21:08:46.725901   39422 main.go:141] libmachine: (multinode-869191-m02) Calling .GetSSHKeyPath
	I0528 21:08:46.726104   39422 main.go:141] libmachine: (multinode-869191-m02) Calling .GetSSHUsername
	I0528 21:08:46.726259   39422 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18966-3963/.minikube/machines/multinode-869191-m02/id_rsa Username:docker}
	I0528 21:08:46.805119   39422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 21:08:46.818827   39422 status.go:257] multinode-869191-m02 status: &{Name:multinode-869191-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0528 21:08:46.818887   39422 status.go:255] checking status of multinode-869191-m03 ...
	I0528 21:08:46.819221   39422 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0528 21:08:46.819267   39422 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0528 21:08:46.834055   39422 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42715
	I0528 21:08:46.834418   39422 main.go:141] libmachine: () Calling .GetVersion
	I0528 21:08:46.834781   39422 main.go:141] libmachine: Using API Version  1
	I0528 21:08:46.834801   39422 main.go:141] libmachine: () Calling .SetConfigRaw
	I0528 21:08:46.835071   39422 main.go:141] libmachine: () Calling .GetMachineName
	I0528 21:08:46.835260   39422 main.go:141] libmachine: (multinode-869191-m03) Calling .GetState
	I0528 21:08:46.836617   39422 status.go:330] multinode-869191-m03 host status = "Stopped" (err=<nil>)
	I0528 21:08:46.836628   39422 status.go:343] host is not running, skipping remaining checks
	I0528 21:08:46.836633   39422 status.go:257] multinode-869191-m03 status: &{Name:multinode-869191-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.29s)
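
In the runs above `minikube status` exits non-zero (7 here) once any host in the profile is stopped, so callers that shell out to it need to read the exit code rather than treat the error as fatal. A short Go sketch of that, assuming the same binary path and profile name:

// statuscode.go: run `minikube status` and report its exit code even
// when the command returns an error, as it does for stopped hosts above.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-869191", "status")
	out, err := cmd.Output()
	fmt.Print(string(out))
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Println("status exit code:", ee.ExitCode())
	} else if err != nil {
		fmt.Println("could not run status:", err)
	}
}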

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (28.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-869191 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-869191 node start m03 -v=7 --alsologtostderr: (27.893387482s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-869191 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (28.50s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-869191 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-869191 node delete m03: (1.89937186s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-869191 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.44s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (186.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-869191 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0528 21:17:37.450852   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/functional-193928/client.crt: no such file or directory
E0528 21:19:42.598413   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-869191 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m5.525379721s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-869191 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (186.07s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (42.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-869191
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-869191-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-869191-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (59.447625ms)

                                                
                                                
-- stdout --
	* [multinode-869191-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18966-3963/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-3963/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-869191-m02' is duplicated with machine name 'multinode-869191-m02' in profile 'multinode-869191'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-869191-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-869191-m03 --driver=kvm2  --container-runtime=crio: (41.139813851s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-869191
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-869191: exit status 80 (211.292912ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-869191 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-869191-m03 already exists in multinode-869191-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-869191-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (42.20s)

                                                
                                    
x
+
TestScheduledStopUnix (114.14s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-516261 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-516261 --memory=2048 --driver=kvm2  --container-runtime=crio: (42.642552459s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-516261 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-516261 -n scheduled-stop-516261
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-516261 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-516261 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-516261 -n scheduled-stop-516261
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-516261
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-516261 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-516261
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-516261: exit status 7 (62.247614ms)

                                                
                                                
-- stdout --
	scheduled-stop-516261
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-516261 -n scheduled-stop-516261
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-516261 -n scheduled-stop-516261: exit status 7 (60.269634ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-516261" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-516261
--- PASS: TestScheduledStopUnix (114.14s)
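
After the last `--schedule 15s` the test simply waits for `status --format={{.Host}}` to report Stopped (returning exit status 7, noted above as "may be ok"). Below is a small Go sketch that polls the same way; the two-minute deadline and five-second interval are arbitrary choices, not values from the test.

// waitstopped.go: poll `minikube status --format={{.Host}}` until the
// host reports Stopped or a deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// status exits non-zero once the host is stopped, so the error
		// is ignored here and only the printed value is inspected.
		out, _ := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", "scheduled-stop-516261").Output()
		if strings.TrimSpace(string(out)) == "Stopped" {
			fmt.Println("host is stopped")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for Stopped")
}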

                                                
                                    
x
+
TestRunningBinaryUpgrade (225.8s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1551683214 start -p running-upgrade-185653 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0528 21:27:37.451285   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/functional-193928/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1551683214 start -p running-upgrade-185653 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m8.991682766s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-185653 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-185653 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m32.825030557s)
helpers_test.go:175: Cleaning up "running-upgrade-185653" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-185653
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-185653: (1.368561884s)
--- PASS: TestRunningBinaryUpgrade (225.80s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-187083 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-187083 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (70.046674ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-187083] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18966-3963/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-3963/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (94.48s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-187083 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-187083 --driver=kvm2  --container-runtime=crio: (1m34.247273111s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-187083 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (94.48s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (71.55s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-187083 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-187083 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m10.386224261s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-187083 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-187083 status -o json: exit status 2 (235.669128ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-187083","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-187083
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (71.55s)
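
The `status -o json` output above is a single JSON object for the cluster, which is easier to consume programmatically than the text form. A minimal Go sketch that decodes it; the field names are copied from that output, while the struct itself is only an illustration.

// statusjson.go: decode the JSON printed by `minikube status -o json`
// for a single profile, as captured in the stdout above.
package main

import (
	"encoding/json"
	"fmt"
)

type clusterStatus struct {
	Name       string `json:"Name"`
	Host       string `json:"Host"`
	Kubelet    string `json:"Kubelet"`
	APIServer  string `json:"APIServer"`
	Kubeconfig string `json:"Kubeconfig"`
	Worker     bool   `json:"Worker"`
}

func main() {
	raw := `{"Name":"NoKubernetes-187083","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
	var st clusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Printf("%s: host=%s kubelet=%s apiserver=%s\n", st.Name, st.Host, st.Kubelet, st.APIServer)
}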

                                                
                                    
x
+
TestNoKubernetes/serial/Start (52.15s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-187083 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-187083 --no-kubernetes --driver=kvm2  --container-runtime=crio: (52.146247186s)
--- PASS: TestNoKubernetes/serial/Start (52.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (2.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-110727 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-110727 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (99.351025ms)

                                                
                                                
-- stdout --
	* [false-110727] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18966-3963/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-3963/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0528 21:30:13.235989   50669 out.go:291] Setting OutFile to fd 1 ...
	I0528 21:30:13.236237   50669 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:30:13.236246   50669 out.go:304] Setting ErrFile to fd 2...
	I0528 21:30:13.236250   50669 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:30:13.236446   50669 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-3963/.minikube/bin
	I0528 21:30:13.236949   50669 out.go:298] Setting JSON to false
	I0528 21:30:13.237964   50669 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4356,"bootTime":1716927457,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0528 21:30:13.238026   50669 start.go:139] virtualization: kvm guest
	I0528 21:30:13.240253   50669 out.go:177] * [false-110727] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0528 21:30:13.241447   50669 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 21:30:13.242554   50669 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 21:30:13.241462   50669 notify.go:220] Checking for updates...
	I0528 21:30:13.244771   50669 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18966-3963/kubeconfig
	I0528 21:30:13.246025   50669 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-3963/.minikube
	I0528 21:30:13.247284   50669 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0528 21:30:13.248501   50669 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 21:30:13.250084   50669 config.go:182] Loaded profile config "NoKubernetes-187083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0528 21:30:13.250176   50669 config.go:182] Loaded profile config "cert-expiration-257793": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0528 21:30:13.250263   50669 config.go:182] Loaded profile config "running-upgrade-185653": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0528 21:30:13.250352   50669 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 21:30:13.288484   50669 out.go:177] * Using the kvm2 driver based on user configuration
	I0528 21:30:13.289640   50669 start.go:297] selected driver: kvm2
	I0528 21:30:13.289652   50669 start.go:901] validating driver "kvm2" against <nil>
	I0528 21:30:13.289667   50669 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0528 21:30:13.291696   50669 out.go:177] 
	W0528 21:30:13.292954   50669 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0528 21:30:13.294207   50669 out.go:177] 

                                                
                                                
** /stderr **
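Note: the exit above is the expected outcome for this test. With the crio container runtime, minikube rejects --cni=false because CRI-O needs a CNI plugin for pod networking (MK_USAGE). For reference, a minimal sketch of a start invocation that satisfies that requirement, reusing the bridge CNI exercised elsewhere in this run (the profile name here is only illustrative):

    out/minikube-linux-amd64 start -p false-110727 --memory=2048 --cni=bridge --driver=kvm2 --container-runtime=crio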
net_test.go:88: 
----------------------- debugLogs start: false-110727 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-110727

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-110727

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-110727

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-110727

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-110727

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-110727

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-110727

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-110727

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-110727

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-110727

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110727"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110727"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110727"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-110727

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110727"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110727"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-110727" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-110727" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-110727" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-110727" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-110727" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-110727" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-110727" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-110727" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110727"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110727"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110727"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110727"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110727"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-110727" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-110727" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-110727" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110727"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110727"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110727"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110727"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110727"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 28 May 2024 21:29:32 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.72.246:8443
  name: cert-expiration-257793
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 28 May 2024 21:29:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.39.233:8443
  name: running-upgrade-185653
contexts:
- context:
    cluster: cert-expiration-257793
    extensions:
    - extension:
        last-update: Tue, 28 May 2024 21:29:32 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: cert-expiration-257793
  name: cert-expiration-257793
- context:
    cluster: running-upgrade-185653
    extensions:
    - extension:
        last-update: Tue, 28 May 2024 21:29:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: running-upgrade-185653
  name: running-upgrade-185653
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-257793
  user:
    client-certificate: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/cert-expiration-257793/client.crt
    client-key: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/cert-expiration-257793/client.key
- name: running-upgrade-185653
  user:
    client-certificate: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/running-upgrade-185653/client.crt
    client-key: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/running-upgrade-185653/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-110727

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110727"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110727"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110727"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110727"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110727"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110727"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110727"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110727"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110727"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110727"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110727"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110727"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110727"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110727"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110727"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110727"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110727"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110727"

                                                
                                                
----------------------- debugLogs end: false-110727 [took: 2.679672156s] --------------------------------
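The uniform "context was not found" / "Profile ... not found" output above is expected: the false-110727 profile was never created (the start command exited with MK_USAGE before provisioning), so the captured kubeconfig has current-context "" and only the cert-expiration-257793 and running-upgrade-185653 entries. For reference, a minimal sketch of pointing kubectl at a context that does exist in that kubeconfig:

    kubectl config get-contexts
    kubectl --context cert-expiration-257793 get nodes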
helpers_test.go:175: Cleaning up "false-110727" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-110727
--- PASS: TestNetworkPlugins/group/false (2.93s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-187083 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-187083 "sudo systemctl is-active --quiet service kubelet": exit status 1 (212.283599ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
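The non-zero exit is the assertion here: systemctl is-active returns 0 only for an active unit, and status 3 is the conventional systemd code for an inactive one, so kubelet not running is exactly what a --no-kubernetes profile should report (the same check is repeated in VerifyK8sNotRunningSecond below). A minimal sketch of the equivalent manual check:

    out/minikube-linux-amd64 ssh -p NoKubernetes-187083 "sudo systemctl is-active kubelet || echo kubelet-not-running"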
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (4.43s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (3.878048038s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (4.43s)
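For reference, the JSON form of the listing used above can be filtered without extra tooling; a minimal sketch, assuming minikube's usual top-level "valid"/"invalid" arrays with a "Name" field per profile:

    out/minikube-linux-amd64 profile list --output=json \
      | python3 -c 'import json,sys; d=json.load(sys.stdin); print([p["Name"] for p in d.get("valid", [])])'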

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (2.59s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-187083
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-187083: (2.590197224s)
--- PASS: TestNoKubernetes/serial/Stop (2.59s)

                                                
                                    
x
+
TestPause/serial/Start (96.05s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-547166 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-547166 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m36.053499959s)
--- PASS: TestPause/serial/Start (96.05s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (69.98s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-187083 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-187083 --driver=kvm2  --container-runtime=crio: (1m9.975285426s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (69.98s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-187083 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-187083 "sudo systemctl is-active --quiet service kubelet": exit status 1 (183.804193ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.18s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.67s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.67s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (96.36s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1142854076 start -p stopped-upgrade-742900 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1142854076 start -p stopped-upgrade-742900 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (49.410093588s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1142854076 -p stopped-upgrade-742900 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1142854076 -p stopped-upgrade-742900 stop: (2.128439286s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-742900 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-742900 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (44.818225827s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (96.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (100.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-110727 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-110727 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m40.311345399s)
--- PASS: TestNetworkPlugins/group/auto/Start (100.31s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.86s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-742900
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.86s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (74.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-110727 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-110727 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m14.694563203s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (74.69s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-6lxwm" [97d0c556-46e7-484f-bfa4-ffee3d3b0c5e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005173964s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-110727 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-110727 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-rqlmc" [dd4413fc-0697-4750-9a0a-0f2018b67e9f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0528 21:34:42.598293   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-rqlmc" [dd4413fc-0697-4750-9a0a-0f2018b67e9f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.066947388s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-110727 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-110727 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-qhf94" [860215a7-df8c-4cb9-b203-f01cbe536ae4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-qhf94" [860215a7-df8c-4cb9-b203-f01cbe536ae4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004389088s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-110727 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-110727 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-110727 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)
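The DNS, Localhost and HairPin assertions above (and the equivalent ones for the other CNI groups below) can be reproduced by hand against the netcat deployment from testdata/netcat-deployment.yaml; a minimal sketch against the kindnet profile, with the echo markers added only for readability:

    kubectl --context kindnet-110727 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context kindnet-110727 exec deployment/netcat -- /bin/sh -c "nc -w 5 -z localhost 8080 && echo localhost-ok"
    kubectl --context kindnet-110727 exec deployment/netcat -- /bin/sh -c "nc -w 5 -z netcat 8080 && echo hairpin-ok"

The hairpin probe connects from the pod back to its own service name (netcat), which only succeeds when hairpin NAT is in place for the CNI under test.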

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-110727 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-110727 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-110727 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (91.68s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-110727 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-110727 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m31.677033668s)
--- PASS: TestNetworkPlugins/group/calico/Start (91.68s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (102.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-110727 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-110727 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m42.327617072s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (102.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-xcx2c" [8f5ec468-e3ce-491e-ba79-4516ad0abd94] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006793436s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-110727 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-110727 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-76wr6" [0e1e95b5-390c-46d0-bc70-dc6df5b94d3f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-76wr6" [0e1e95b5-390c-46d0-bc70-dc6df5b94d3f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.011223383s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-110727 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-110727 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-110727 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (86.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-110727 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-110727 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m26.053942765s)
--- PASS: TestNetworkPlugins/group/flannel/Start (86.05s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-110727 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-110727 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-2d2rm" [1796cadc-118e-4663-936f-17da732bfc82] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-2d2rm" [1796cadc-118e-4663-936f-17da732bfc82] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004341462s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-110727 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-110727 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-110727 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (114.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-110727 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-110727 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m54.019438787s)
--- PASS: TestNetworkPlugins/group/bridge/Start (114.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (123.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-110727 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E0528 21:37:37.451013   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/functional-193928/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-110727 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (2m3.341252834s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (123.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-sm6qf" [d06f8217-a65f-44dd-91da-91d02d2c6a8d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004605809s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-110727 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-110727 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-9jkxl" [8428e192-a69d-47fd-a8ed-e6e417337a74] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-9jkxl" [8428e192-a69d-47fd-a8ed-e6e417337a74] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004306836s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-110727 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-110727 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-110727 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-110727 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-110727 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-gr8l9" [997968f9-e0e4-4509-8f6d-275e397c9c1c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-gr8l9" [997968f9-e0e4-4509-8f6d-275e397c9c1c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003687947s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-110727 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-110727 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-110727 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-110727 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-110727 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-7mhc4" [411c605b-f427-46ad-b393-99522383812d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-7mhc4" [411c605b-f427-46ad-b393-99522383812d] Running
E0528 21:39:35.201845   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kindnet-110727/client.crt: no such file or directory
E0528 21:39:37.762069   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kindnet-110727/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 14.004394191s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.30s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (111.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-290122 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
E0528 21:39:32.641599   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kindnet-110727/client.crt: no such file or directory
E0528 21:39:32.647333   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kindnet-110727/client.crt: no such file or directory
E0528 21:39:32.657626   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kindnet-110727/client.crt: no such file or directory
E0528 21:39:32.677945   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kindnet-110727/client.crt: no such file or directory
E0528 21:39:32.718237   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kindnet-110727/client.crt: no such file or directory
E0528 21:39:32.798588   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kindnet-110727/client.crt: no such file or directory
E0528 21:39:32.959425   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kindnet-110727/client.crt: no such file or directory
E0528 21:39:33.280195   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kindnet-110727/client.crt: no such file or directory
E0528 21:39:33.921138   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kindnet-110727/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-290122 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (1m51.187263737s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (111.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-110727 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)
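The DNS subtest only runs a single nslookup of kubernetes.default from inside the netcat deployment. When that probe fails, the debugLogs dump at the end of this report falls back to more targeted queries against the cluster DNS service; roughly the same checks can be run by hand, assuming the enable-default-cni-110727 context still exists and the dnsutils image in the deployment ships dig:

    # the probe the test itself runs
    kubectl --context enable-default-cni-110727 exec deployment/netcat -- nslookup kubernetes.default
    # query the cluster DNS service directly (add +tcp to force tcp/53)
    kubectl --context enable-default-cni-110727 exec deployment/netcat -- dig @10.96.0.10 kubernetes.default.svc.cluster.local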

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-110727 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-110727 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)
E0528 22:09:42.598243   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.crt: no such file or directory
E0528 22:09:45.763172   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/auto-110727/client.crt: no such file or directory

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (97.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-595279 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
E0528 21:39:56.003689   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/auto-110727/client.crt: no such file or directory
E0528 21:40:06.244315   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/auto-110727/client.crt: no such file or directory
E0528 21:40:13.603953   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kindnet-110727/client.crt: no such file or directory
E0528 21:40:26.725097   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/auto-110727/client.crt: no such file or directory
E0528 21:40:54.564345   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kindnet-110727/client.crt: no such file or directory
E0528 21:41:05.646458   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.crt: no such file or directory
E0528 21:41:07.685608   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/auto-110727/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-595279 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (1m37.295142704s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (97.30s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-290122 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9b912c7e-7dc0-406d-934e-56f8c76293b4] Pending
helpers_test.go:344: "busybox" [9b912c7e-7dc0-406d-934e-56f8c76293b4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9b912c7e-7dc0-406d-934e-56f8c76293b4] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004711566s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-290122 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.28s)
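The DeployApp step creates a busybox pod from testdata/busybox.yaml, waits for it to report Running, then execs "ulimit -n" inside it. The harness uses its own polling helper for the wait; a rough manual equivalent with plain kubectl, assuming the no-preload-290122 context is still available, would be:

    kubectl --context no-preload-290122 create -f testdata/busybox.yaml
    # the test allows up to 8m0s for the pod to become healthy
    kubectl --context no-preload-290122 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m0s
    kubectl --context no-preload-290122 exec busybox -- /bin/sh -c "ulimit -n"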

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-595279 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1b75037d-627f-4727-8935-8b459c226fe7] Pending
helpers_test.go:344: "busybox" [1b75037d-627f-4727-8935-8b459c226fe7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1b75037d-627f-4727-8935-8b459c226fe7] Running
E0528 21:41:36.131972   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/calico-110727/client.crt: no such file or directory
E0528 21:41:36.137240   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/calico-110727/client.crt: no such file or directory
E0528 21:41:36.147500   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/calico-110727/client.crt: no such file or directory
E0528 21:41:36.167757   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/calico-110727/client.crt: no such file or directory
E0528 21:41:36.208032   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/calico-110727/client.crt: no such file or directory
E0528 21:41:36.288398   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/calico-110727/client.crt: no such file or directory
E0528 21:41:36.448811   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/calico-110727/client.crt: no such file or directory
E0528 21:41:36.769429   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/calico-110727/client.crt: no such file or directory
E0528 21:41:37.410343   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/calico-110727/client.crt: no such file or directory
E0528 21:41:38.690516   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/calico-110727/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004129802s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-595279 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.31s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-290122 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-290122 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.01s)
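This step enables the metrics-server addon with its image and registry deliberately overridden (registry.k8s.io/echoserver:1.4 behind the dummy registry fake.domain) and then describes the resulting deployment. A quick way to spot-check that the override landed, assuming the deployment keeps its usual metrics-server name in kube-system:

    kubectl --context no-preload-290122 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'
    # should print the overridden image, prefixed with the fake.domain registry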

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.96s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-595279 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0528 21:41:41.251186   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/calico-110727/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-595279 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.96s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (651.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-290122 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
E0528 21:44:04.198712   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/bridge-110727/client.crt: no such file or directory
E0528 21:44:04.218936   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/bridge-110727/client.crt: no such file or directory
E0528 21:44:04.259475   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/bridge-110727/client.crt: no such file or directory
E0528 21:44:04.340201   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/bridge-110727/client.crt: no such file or directory
E0528 21:44:04.501146   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/bridge-110727/client.crt: no such file or directory
E0528 21:44:04.821690   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/bridge-110727/client.crt: no such file or directory
E0528 21:44:05.462849   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/bridge-110727/client.crt: no such file or directory
E0528 21:44:06.743694   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/bridge-110727/client.crt: no such file or directory
E0528 21:44:09.304662   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/bridge-110727/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-290122 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (10m50.878837127s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-290122 -n no-preload-290122
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (651.13s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (568.76s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-595279 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
E0528 21:44:14.425816   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/bridge-110727/client.crt: no such file or directory
E0528 21:44:19.973880   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/calico-110727/client.crt: no such file or directory
E0528 21:44:24.666801   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/bridge-110727/client.crt: no such file or directory
E0528 21:44:25.052462   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/enable-default-cni-110727/client.crt: no such file or directory
E0528 21:44:25.057775   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/enable-default-cni-110727/client.crt: no such file or directory
E0528 21:44:25.067974   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/enable-default-cni-110727/client.crt: no such file or directory
E0528 21:44:25.088224   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/enable-default-cni-110727/client.crt: no such file or directory
E0528 21:44:25.128472   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/enable-default-cni-110727/client.crt: no such file or directory
E0528 21:44:25.208826   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/enable-default-cni-110727/client.crt: no such file or directory
E0528 21:44:25.369381   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/enable-default-cni-110727/client.crt: no such file or directory
E0528 21:44:25.690332   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/enable-default-cni-110727/client.crt: no such file or directory
E0528 21:44:26.330811   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/enable-default-cni-110727/client.crt: no such file or directory
E0528 21:44:27.611368   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/enable-default-cni-110727/client.crt: no such file or directory
E0528 21:44:30.171896   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/enable-default-cni-110727/client.crt: no such file or directory
E0528 21:44:32.641267   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kindnet-110727/client.crt: no such file or directory
E0528 21:44:35.292256   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/enable-default-cni-110727/client.crt: no such file or directory
E0528 21:44:39.180683   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/custom-flannel-110727/client.crt: no such file or directory
E0528 21:44:42.376025   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/flannel-110727/client.crt: no such file or directory
E0528 21:44:42.597700   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.crt: no such file or directory
E0528 21:44:45.147806   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/bridge-110727/client.crt: no such file or directory
E0528 21:44:45.532505   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/enable-default-cni-110727/client.crt: no such file or directory
E0528 21:44:45.763966   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/auto-110727/client.crt: no such file or directory
E0528 21:45:00.325550   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kindnet-110727/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-595279 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (9m28.504926489s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-595279 -n embed-certs-595279
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (568.76s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (4.53s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-499466 --alsologtostderr -v=3
E0528 21:45:06.012806   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/enable-default-cni-110727/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-499466 --alsologtostderr -v=3: (4.532071955s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (4.53s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-499466 -n old-k8s-version-499466
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-499466 -n old-k8s-version-499466: exit status 7 (65.881637ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-499466 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
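The "exit status 7 (may be ok)" line above is expected here: minikube status appears to encode cluster state in its exit code, so querying a profile that has just been stopped returns non-zero even though the command itself worked. A tolerant manual check along the same lines:

    out/minikube-linux-amd64 status --format='{{.Host}}' -p old-k8s-version-499466 -n old-k8s-version-499466 \
      || echo "status exited $? (non-zero while the profile is stopped)"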

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (237.76s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-249165 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
E0528 21:47:03.814823   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/calico-110727/client.crt: no such file or directory
E0528 21:47:08.894066   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/enable-default-cni-110727/client.crt: no such file or directory
E0528 21:47:23.021382   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/custom-flannel-110727/client.crt: no such file or directory
E0528 21:47:37.451003   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/functional-193928/client.crt: no such file or directory
E0528 21:48:20.453702   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/flannel-110727/client.crt: no such file or directory
E0528 21:48:48.137591   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/flannel-110727/client.crt: no such file or directory
E0528 21:49:00.498588   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/functional-193928/client.crt: no such file or directory
E0528 21:49:04.181651   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/bridge-110727/client.crt: no such file or directory
E0528 21:49:25.051716   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/enable-default-cni-110727/client.crt: no such file or directory
E0528 21:49:31.869040   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/bridge-110727/client.crt: no such file or directory
E0528 21:49:32.641442   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/kindnet-110727/client.crt: no such file or directory
E0528 21:49:42.597636   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/addons-307023/client.crt: no such file or directory
E0528 21:49:45.764019   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/auto-110727/client.crt: no such file or directory
E0528 21:49:52.735264   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/enable-default-cni-110727/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-249165 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (3m57.758548549s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (237.76s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-249165 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [63d28de3-5b73-43f3-bde4-2528677ee385] Pending
helpers_test.go:344: "busybox" [63d28de3-5b73-43f3-bde4-2528677ee385] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [63d28de3-5b73-43f3-bde4-2528677ee385] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.003647414s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-249165 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.30s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-249165 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-249165 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (619.87s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-249165 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-249165 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (10m19.615661072s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-249165 -n default-k8s-diff-port-249165
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (619.87s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (57.4s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-588598 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
E0528 22:09:04.182325   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/bridge-110727/client.crt: no such file or directory
E0528 22:09:25.052437   11760 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/enable-default-cni-110727/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-588598 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (57.396144827s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (57.40s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.14s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-588598 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-588598 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.139382359s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.14s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (7.58s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-588598 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-588598 --alsologtostderr -v=3: (7.577375369s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.58s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-588598 -n newest-cni-588598
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-588598 -n newest-cni-588598: exit status 7 (65.318168ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-588598 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (34.72s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-588598 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-588598 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (34.470492585s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-588598 -n newest-cni-588598
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (34.72s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-588598 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.41s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-588598 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-588598 -n newest-cni-588598
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-588598 -n newest-cni-588598: exit status 2 (229.033403ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-588598 -n newest-cni-588598
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-588598 -n newest-cni-588598: exit status 2 (223.182824ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-588598 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-588598 -n newest-cni-588598
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-588598 -n newest-cni-588598
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.41s)
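The Pause subtest cycles pause, status, unpause, status; the two "exit status 2 (may be ok)" checks in the middle are the paused state being reported through the exit code rather than a failure. The same cycle by hand, with the same caveat about non-zero status exits while paused:

    out/minikube-linux-amd64 pause -p newest-cni-588598 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p newest-cni-588598   # Paused, exit 2
    out/minikube-linux-amd64 unpause -p newest-cni-588598 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p newest-cni-588598   # should report Running again, exit 0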

                                                
                                    

Test skip (37/312)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.1/cached-images 0
15 TestDownloadOnly/v1.30.1/binaries 0
16 TestDownloadOnly/v1.30.1/kubectl 0
20 TestDownloadOnlyKic 0
34 TestAddons/parallel/Olm 0
41 TestAddons/parallel/Volcano 0
48 TestDockerFlags 0
51 TestDockerEnvContainerd 0
53 TestHyperKitDriverInstallOrUpdate 0
54 TestHyperkitDriverSkipUpgrade 0
105 TestFunctional/parallel/DockerEnv 0
106 TestFunctional/parallel/PodmanEnv 0
142 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
143 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
144 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
145 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
146 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
147 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
148 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
149 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
154 TestGvisorAddon 0
176 TestImageBuild 0
203 TestKicCustomNetwork 0
204 TestKicExistingNetwork 0
205 TestKicCustomSubnet 0
206 TestKicStaticIP 0
238 TestChangeNoneUser 0
241 TestScheduledStopWindows 0
243 TestSkaffold 0
245 TestInsufficientStorage 0
249 TestMissingContainerUpgrade 0
257 TestNetworkPlugins/group/kubenet 2.66
265 TestNetworkPlugins/group/cilium 3.59
274 TestStartStop/group/disable-driver-mounts 0.13
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Volcano
addons_test.go:871: skipping: crio not supported
--- SKIP: TestAddons/parallel/Volcano (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (2.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-110727 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-110727

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-110727

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-110727

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-110727

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-110727

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-110727

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-110727

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-110727

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-110727

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-110727

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110727"

>>> host: /etc/hosts:
* Profile "kubenet-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110727"

>>> host: /etc/resolv.conf:
* Profile "kubenet-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110727"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-110727

>>> host: crictl pods:
* Profile "kubenet-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110727"

>>> host: crictl containers:
* Profile "kubenet-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110727"

>>> k8s: describe netcat deployment:
error: context "kubenet-110727" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-110727" does not exist

>>> k8s: netcat logs:
error: context "kubenet-110727" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-110727" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-110727" does not exist

>>> k8s: coredns logs:
error: context "kubenet-110727" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-110727" does not exist

>>> k8s: api server logs:
error: context "kubenet-110727" does not exist

>>> host: /etc/cni:
* Profile "kubenet-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110727"

>>> host: ip a s:
* Profile "kubenet-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110727"

>>> host: ip r s:
* Profile "kubenet-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110727"

>>> host: iptables-save:
* Profile "kubenet-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110727"

>>> host: iptables table nat:
* Profile "kubenet-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110727"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-110727" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-110727" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-110727" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110727"

>>> host: kubelet daemon config:
* Profile "kubenet-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110727"

>>> k8s: kubelet logs:
* Profile "kubenet-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110727"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110727"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110727"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 28 May 2024 21:29:32 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.72.246:8443
  name: cert-expiration-257793
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 28 May 2024 21:29:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.39.233:8443
  name: running-upgrade-185653
contexts:
- context:
    cluster: cert-expiration-257793
    extensions:
    - extension:
        last-update: Tue, 28 May 2024 21:29:32 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: cert-expiration-257793
  name: cert-expiration-257793
- context:
    cluster: running-upgrade-185653
    extensions:
    - extension:
        last-update: Tue, 28 May 2024 21:29:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: running-upgrade-185653
  name: running-upgrade-185653
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-257793
  user:
    client-certificate: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/cert-expiration-257793/client.crt
    client-key: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/cert-expiration-257793/client.key
- name: running-upgrade-185653
  user:
    client-certificate: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/running-upgrade-185653/client.crt
    client-key: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/running-upgrade-185653/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-110727

>>> host: docker daemon status:
* Profile "kubenet-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110727"

>>> host: docker daemon config:
* Profile "kubenet-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110727"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110727"

>>> host: docker system info:
* Profile "kubenet-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110727"

>>> host: cri-docker daemon status:
* Profile "kubenet-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110727"

>>> host: cri-docker daemon config:
* Profile "kubenet-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110727"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110727"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110727"

>>> host: cri-dockerd version:
* Profile "kubenet-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110727"

>>> host: containerd daemon status:
* Profile "kubenet-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110727"

>>> host: containerd daemon config:
* Profile "kubenet-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110727"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110727"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110727"

>>> host: containerd config dump:
* Profile "kubenet-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110727"

>>> host: crio daemon status:
* Profile "kubenet-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110727"

>>> host: crio daemon config:
* Profile "kubenet-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110727"

>>> host: /etc/crio:
* Profile "kubenet-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110727"

>>> host: crio config:
* Profile "kubenet-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110727"

----------------------- debugLogs end: kubenet-110727 [took: 2.521814864s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-110727" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-110727
--- SKIP: TestNetworkPlugins/group/kubenet (2.66s)

TestNetworkPlugins/group/cilium (3.59s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-110727 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-110727

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-110727

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-110727

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-110727

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-110727

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-110727

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-110727

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-110727

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-110727

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-110727

>>> host: /etc/nsswitch.conf:
* Profile "cilium-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110727"

>>> host: /etc/hosts:
* Profile "cilium-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110727"

>>> host: /etc/resolv.conf:
* Profile "cilium-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110727"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-110727

>>> host: crictl pods:
* Profile "cilium-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110727"

>>> host: crictl containers:
* Profile "cilium-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110727"

>>> k8s: describe netcat deployment:
error: context "cilium-110727" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-110727" does not exist

>>> k8s: netcat logs:
error: context "cilium-110727" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-110727" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-110727" does not exist

>>> k8s: coredns logs:
error: context "cilium-110727" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-110727" does not exist

>>> k8s: api server logs:
error: context "cilium-110727" does not exist

>>> host: /etc/cni:
* Profile "cilium-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110727"

>>> host: ip a s:
* Profile "cilium-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110727"

>>> host: ip r s:
* Profile "cilium-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110727"

>>> host: iptables-save:
* Profile "cilium-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110727"

>>> host: iptables table nat:
* Profile "cilium-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110727"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-110727

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-110727

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-110727" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-110727" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-110727

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-110727

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-110727" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-110727" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-110727" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-110727" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-110727" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110727"

>>> host: kubelet daemon config:
* Profile "cilium-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110727"

>>> k8s: kubelet logs:
* Profile "cilium-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110727"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110727"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110727"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 28 May 2024 21:29:32 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.72.246:8443
  name: cert-expiration-257793
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18966-3963/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 28 May 2024 21:29:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.39.233:8443
  name: running-upgrade-185653
contexts:
- context:
    cluster: cert-expiration-257793
    extensions:
    - extension:
        last-update: Tue, 28 May 2024 21:29:32 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: cert-expiration-257793
  name: cert-expiration-257793
- context:
    cluster: running-upgrade-185653
    extensions:
    - extension:
        last-update: Tue, 28 May 2024 21:29:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: running-upgrade-185653
  name: running-upgrade-185653
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-257793
  user:
    client-certificate: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/cert-expiration-257793/client.crt
    client-key: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/cert-expiration-257793/client.key
- name: running-upgrade-185653
  user:
    client-certificate: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/running-upgrade-185653/client.crt
    client-key: /home/jenkins/minikube-integration/18966-3963/.minikube/profiles/running-upgrade-185653/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-110727

>>> host: docker daemon status:
* Profile "cilium-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110727"

>>> host: docker daemon config:
* Profile "cilium-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110727"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110727"

>>> host: docker system info:
* Profile "cilium-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110727"

>>> host: cri-docker daemon status:
* Profile "cilium-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110727"

>>> host: cri-docker daemon config:
* Profile "cilium-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110727"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110727"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110727"

>>> host: cri-dockerd version:
* Profile "cilium-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110727"

>>> host: containerd daemon status:
* Profile "cilium-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110727"

>>> host: containerd daemon config:
* Profile "cilium-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110727"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110727"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110727"

>>> host: containerd config dump:
* Profile "cilium-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110727"

>>> host: crio daemon status:
* Profile "cilium-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110727"

>>> host: crio daemon config:
* Profile "cilium-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110727"

>>> host: /etc/crio:
* Profile "cilium-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110727"

>>> host: crio config:
* Profile "cilium-110727" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110727"

----------------------- debugLogs end: cilium-110727 [took: 3.448436578s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-110727" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-110727
--- SKIP: TestNetworkPlugins/group/cilium (3.59s)

TestStartStop/group/disable-driver-mounts (0.13s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-807140" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-807140
--- SKIP: TestStartStop/group/disable-driver-mounts (0.13s)